Automatically process a cube with a SQL Server Agent job
(1) Connect to the Analysis Services server and select the database that we want to process automatically. Right-click this database and choose 'Process':
(2) In the 'Process Database' dialog that opens, click 'Script Action to New Query Window' as below:
(3) A query window opens as below; copy all of this code.
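For reference, the script that SSMS generates here is an XMLA Process command. The sketch below only shows the typical shape of such a script, assuming a hypothetical database ID of MyOlapDB and a full process; always copy the script your own server generated rather than this sample.

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <Object>
        <!-- DatabaseID of the database to process; "MyOlapDB" is a placeholder -->
        <DatabaseID>MyOlapDB</DatabaseID>
      </Object>
      <!-- ProcessFull rebuilds all data; SSMS may script a different type depending on the options you chose -->
      <Type>ProcessFull</Type>
      <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
  </Parallel>
</Batch>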
(4) Connect to the Database Engine, expand 'SQL Server Agent', right-click the 'Jobs' node, and select 'New Job':
(5) In the 'New Job' dialog that opens, enter a name and description for the job.
(6) Select the 'Steps' page on the left side of the 'New Job' dialog, then click the 'New' button to open the 'New Job Step' dialog.
(7) In the 'New Job Step' dialog, enter a name for the step, set the Type to 'SQL Server Analysis Services Command', enter the Analysis Services server name, and paste the script we copied in step (3) into the Command box. You can also select the 'Advanced' page on the left to configure further options for this step, such as an output (log) file.
After you finish setting up the step, click OK to return to the 'New Job' dialog. (A scripted alternative to steps (5)-(7) is sketched below.)
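If you prefer to script the job rather than use the dialogs, the same job and step can be created with the msdb stored procedures. The following is a minimal sketch, assuming a hypothetical job name 'Process MyOlapDB' and Analysis Services instance 'MYSERVER\OLAP'; the @command value must be the XMLA script you copied in step (3).

USE msdb;
GO

-- Create the job shell (equivalent to step 5); the job name is a placeholder
EXEC dbo.sp_add_job
    @job_name    = N'Process MyOlapDB',
    @description = N'Nightly full process of the Analysis Services database';

-- Add the Analysis Services command step (equivalent to step 7).
-- @server is the Analysis Services instance (placeholder here);
-- @command should contain the full XMLA script copied in step (3).
EXEC dbo.sp_add_jobstep
    @job_name  = N'Process MyOlapDB',
    @step_name = N'Process database',
    @subsystem = N'ANALYSISCOMMAND',
    @server    = N'MYSERVER\OLAP',
    @command   = N'<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"> ... paste the script from step (3) here ... </Batch>';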
(8) Select the 'Schedules' page and then click the 'New' button as below:
(9) In the 'New Job Schedule' dialog, enter a name for the schedule, set the frequency for the job, and set the daily frequency as below; finally, click OK.
(10) Click OK to finish creating the job. It is now added to SQL Server Agent and will run the processing command every day on the schedule you defined.
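To complete the scripted alternative started above, the daily schedule from step (9) and the registration of the job with the local Agent can also be added through msdb procedures. The schedule name and the 02:00 start time below are assumptions; adjust them to your own schedule.

USE msdb;
GO

-- Daily schedule (equivalent to step 9): freq_type 4 = daily, every 1 day, starting at 02:00 (placeholder time)
EXEC dbo.sp_add_jobschedule
    @job_name          = N'Process MyOlapDB',
    @name              = N'Daily cube processing',
    @freq_type         = 4,
    @freq_interval     = 1,
    @active_start_time = 020000;

-- Register the job with the local SQL Server Agent so it is actually picked up and run
EXEC dbo.sp_add_jobserver
    @job_name    = N'Process MyOlapDB',
    @server_name = N'(LOCAL)';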