Friday, December 7, 2012

Task 20: NoSQL vs SQL


When people say SQL vs NoSQL, they are really talking about relational versus non-relational databases, and that is what I'm covering here: relational databases (such as Oracle, MySQL, and SQL Server) versus newer non-relational databases (such as MongoDB, CouchDB, BigTable, and others).



Popular open source NoSQL data stores include:
  • Cassandra (tabular, written in Java, used by Cisco WebEx, Digg, Facebook, IBM, Mahalo, Rackspace, Reddit and Twitter)
  • CouchDB (document, written in Erlang, used by BBC and Engine Yard)
  • Dynomite (key-value, written in Erlang, used by Powerset)
  • HBase (tabular, written in Java, used by Bing)
  • Hypertable (tabular, written in C++, used by Baidu)
  • Kai (key-value, written in Erlang)
  • MemcacheDB (key-value, written in C, used by Reddit)
  • MongoDB (document, written in C++, used by Electronic Arts, Github, NY Times and Sourceforge)
  • Neo4j (graph, written in Java, used by some Swedish universities)
  • Project Voldemort (key-value, written in Java, used by LinkedIn)
  • Redis (key-value, written in C, used by Craigslist, Engine Yard and Github)
  • Riak (key-value, written in Erlang, used by Comcast and Mochi Media)
  • Ringo (key-value, written in Erlang, used by Nokia)
  • Scalaris (key-value, written in Erlang, used by OnScale)
  • Terrastore (document, written in Java)
  • ThruDB (document, written in C++, used by JunkDepot.com)
  • Tokyo Cabinet/Tokyo Tyrant (key-value, written in C, used by Mixi.jp (Japanese social networking site))


db4o (database for objects) is an embeddable open source object database for Java and .NET developers. It is developed, commercially licensed and supported by Versant.
db4o is written in Java and .NET and provides the respective APIs. db4o can run on any operating system that supports Java or .NET. 




Thursday, November 29, 2012

Task 19: Update/Insert procedure


SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[spUpdateInsert]
AS
SET NOCOUNT ON; 
BEGIN TRY
BEGIN TRANSACTION

UPDATE ZIP
SET 
ZIP.[ZipCode] = ZC.[PostalCode],
ZIP.[CityID] = C.[ID],
ZIP.[StateID] = S.[ID],
ZIP.[County] = ZC.[County]
FROM dbo.ZipCode ZIP
INNER JOIN [New].dbo.tblZipCode ZC ON ZIP.[ZipCode] = ZC.[PostalCode]
LEFT JOIN [QA].dbo.[city] c on c.name = zc.city
LEFT JOIN [QA].dbo.[state] s on s.name = zc.[State]
--------------------------------------------------------------

SET IDENTITY_INSERT dbo.ZipCode ON
INSERT INTO dbo.ZipCode(
[ID],
[ZipCode],
[CityID],
[StateID],
[County]
)
SELECT 
ZC.[PostalCodeID],
ZC.[PostalCode],
C.[ID] AS CityID,
S.[ID] AS StateID,
ZC.[County]
FROM dbo.ZipCode ZIP
RIGHT JOIN [New].dbo.tblZipCode ZC ON ZIP.[ZipCode] = ZC.[PostalCode]
LEFT JOIN [QA].dbo.[city] c on c.name = zc.city
LEFT JOIN [QA].dbo.[state] s on s.name = zc.[State]
WHERE ZIP.[ID] is null

SET IDENTITY_INSERT dbo.ZipCode OFF


--SELECT * FROM [New].dbo.tblZipCode

--SELECT * FROM [QA].dbo.ZipCode

--------------------------------------------------------------

UPDATE ZIP
SET 
ZIP.[CityID] = C.[ID],
ZIP.[StateID] = S.[ID]
 
FROM dbo.ZipCode ZIP
INNER JOIN [DieselDirect_New].dbo.tblZipCode ZC ON ZIP.[ZipCode] = ZC.[PostalCode]
LEFT JOIN [DDBillingPortal_QA].dbo.[city] c on c.name = zc.city
LEFT JOIN [DDBillingPortal_QA].dbo.[state] s on s.name = zc.[State]

COMMIT TRAN -- Transaction Success!

END TRY

BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRAN --Roll back in case of error

    -- Capture the error details here: ERROR_MESSAGE()/ERROR_SEVERITY() only return values inside a CATCH block
    DECLARE @ERROR_MESSAGE NVARCHAR(4000) = ERROR_MESSAGE();
    DECLARE @ERROR_SEVERITY INT = ERROR_SEVERITY();
    RAISERROR(@ERROR_MESSAGE, @ERROR_SEVERITY, 1)
END CATCH



Wednesday, October 24, 2012

Task 18: SQL Profiler





     SQL Profiler is the graphical tool that lets you capture and analyze SQL Server events in real time. I'm going to drill a bit deeper into this useful tool and point you to an interesting book by Brad McGehee. You can download it from here.

  • Chapter 01: Getting Started with Profiler
  • Chapter 02: Working with Traces and Templates
  • Chapter 03: Profiler GUI Tips and Tricks
  • Chapter 04: How to Identify Slow Running Queries
  • Chapter 05: How to Identify and Troubleshoot SQL Server Problems
  • Chapter 06: Using Profiler to Audit Database Activity
  • Chapter 07: Using Profiler with the Database Engine Tuning Advisor
  • Chapter 08: Correlating Profiler with Performance Monitor
  • Chapter 09: How to Capture Profiler Traces Programmatically
  • Chapter 10: Profiler Best Practices
  • Chapter 11: Profiler Events and Data Columns Explained

You should also check SQL Profiler Tips and Tricks and SQL Profiler Best Practices.
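Chapter 09 covers capturing traces programmatically; as a small taste of that, here is a minimal sketch of a server-side trace. The output path and the chosen events/columns are assumptions, and the .trc extension is appended automatically.

-- Minimal server-side trace sketch (the programmatic counterpart of the Profiler GUI)
DECLARE @TraceID int, @maxfilesize bigint = 50, @on bit = 1;

-- The trace is created in a stopped state
EXEC sp_trace_create @TraceID OUTPUT, 0, N'D:\Traces\SlowBatches', @maxfilesize, NULL;

-- Event 12 = SQL:BatchCompleted; columns 1 = TextData, 12 = SPID, 13 = Duration
EXEC sp_trace_setevent @TraceID, 12, 1,  @on;
EXEC sp_trace_setevent @TraceID, 12, 12, @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;

-- Start the trace
EXEC sp_trace_setstatus @TraceID, 1;

-- Later, read the captured file back into a result set:
-- SELECT * FROM fn_trace_gettable(N'D:\Traces\SlowBatches.trc', DEFAULT);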

Keyboard Shortcuts

  • Ctrl+Shift+Delete to clear the current trace window
  • Ctrl+F to open the Find dialog box
  • F3 to find the next match
  • Shift+F3 to find the previous match
  • Ctrl+N to open a new trace
  • F5 to start a replay
  • Shift+F5 to stop a replay
  • F9 to toggle a breakpoint
  • F10 to single-step
  • Ctrl+F10 to run to the cursor

A better SQL Server Profiler alternative is accuprofiler - http://www.accuprofiler.com/




Tuesday, October 23, 2012

Task 17: SSIS with variable excel connection manager



1. In order to deploy an Excel Source with the .xlsx extension you need to install the following provider (the Access Database Engine / ACE OLE DB provider):
http://www.microsoft.com/en-us/download/details.aspx?id=13255

2. Create a new SSIS project (Integration Services Project1)

3. Create a new package (LoadCostChanges.dtsx)

4. Declare 3 variables: UserID, FilePath and VendorUploadID (the variables will be sent as parameters from a stored procedure to the package)



5. Add from toolbox a DataFlow component (Import the Excel File) and an Execute SQL Task.


6. Because the Excel file can contain multiple empty rows, we'll need to delete them using an Execute SQL Task (this will be the last step of the implementation); a possible statement is sketched below.
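A possible statement for that Execute SQL Task, assuming an OLE DB connection with the ? parameter mapped to User::VendorUploadID; the data column names are hypothetical and should be replaced with the real columns of dbo.LoadExcel (VendorUploadID is assumed to be populated by the Derived Column step).

-- Hypothetical cleanup: remove rows of the current upload where every data column is NULL
DELETE FROM dbo.LoadExcel
WHERE VendorUploadID = ?        -- mapped to User::VendorUploadID
  AND ItemNumber IS NULL        -- placeholder column name
  AND Cost IS NULL;             -- placeholder column name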




7.  In Data Flow component choose an Excel Source, a Derived Column and an OLE DB Destination component from Toolbox.


8. Excel Source configuration:


9. Excel Connection Manager configuration


10.  Derived Column configuration


11. OLE DB Destination configuration


12. Excel Connection Manager configuration
Connection String:  "Provider=Microsoft.ACE.OLEDB.12.0;Data Source="+ @[User::FilePath] + ";Extended Properties=\"EXCEL 12.0;HDR=YES\";"


13. The dbo.LoadExcel table must be created with all the columns from all the Excel files, so that columns can be mapped from more than one Excel Source. 

CREATE TABLE [dbo].[LoadExcel](
[LoadExcelID] [bigint] IDENTITY(1,1) NOT NULL,
[VendorUploadID] [int] NULL,
        ............................................
        [UserID] [uniqueidentifier] NULL,
 CONSTRAINT [PK_LoadExcel] PRIMARY KEY CLUSTERED 
(
[LoadExcelID] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]
GO


14. You need to connect to your integration server and deploy all your packages in SQL Server:


15. The stored procedure which calls the package and sends it the parameters: 

USE [TFMVendorPortal]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-----------------------------------------------------------------
--settings needed before running the xp_cmdshell command for the first time
-----------------------------------------------------------------
/*
sp_configure 'Show Advanced Options', 1;
RECONFIGURE;
GO
sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;
GO
EXEC sp_configure 'show advanced options', 1 
GO
RECONFIGURE WITH OVERRIDE 
GO
EXEC sp_configure 'xp_cmdshell', 1 
GO
RECONFIGURE WITH OVERRIDE 
GO
EXEC sp_configure 'show advanced options', 0 
GO
*/

ALTER PROCEDURE [dbo].[tfm_RunSSISPackages] 
@VendorUploadId int  ,@FilePath varchar(500), @UserID UNIQUEIDENTIFIER
AS
BEGIN

DECLARE @SQL NVARCHAR(max)

 ----TEST SCRIPT 
--DECLARE @SQL NVARCHAR(MAX)
--declare @FilePath nvarchar(max)
--declare @VendorUploadID INT
--SET @VendorUploadId = 233
--set @FilePath = '\\192.168.1.3\tfm\7\vernon_CostChanges_2012-10-15_17-13-33-718.xlsx'

set @SQL = '
EXEC master..xp_cmdshell ''""C:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\DTExec.exe" /DTS "\File System\LoadCostChanges" /SERVER "SFVM05" /set \package.variables[User::VendorUploadID].Value;"'+ cast(@VendorUploadID as nvarchar(max))+ '" /set \package.variables[User::FilePath].Value;"'+ @FilePath + '" /set \package.variables[User::UserID].Value;"'+ cast(@UserID as nvarchar(36)) + '""'''
exec (@SQL)

END
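For reference, a hedged example of calling the procedure; the GUID is only a placeholder, and the file path follows the pattern from the test script above.

EXEC dbo.tfm_RunSSISPackages
     @VendorUploadId = 233,
     @FilePath = '\\192.168.1.3\tfm\7\vernon_CostChanges_2012-10-15_17-13-33-718.xlsx',
     @UserID = 'A0EEBC99-9C0B-4EF8-BB6D-6BB9BD380A11';   -- any valid GUID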




Tuesday, October 16, 2012

Task 16. BCP utility

Bulk Copy Program (bcp) is a tool used for copying data between servers, especially when you don't have Business Intelligence Development Studio installed on the server from which you want to take the data.
You can find more about other methods of doing this here.

The task was to copy the database size records from one server to another in order to process the data. To obtain the database sizes from one server I used the following query:

SELECT DB_NAME(database_id) AS DatabaseName,
Name AS Logical_Name,
Physical_Name, (size*8)/1024 SizeMB
FROM sys.master_files
where database_id >4 --skip system databases
GO

or you can use:

with fs
as (
    select database_id, type, size * 8.0 / 1024 size
    from sys.master_files)
select 
    name,
    (select sum(size) from fs where type = 0 and fs.database_id = db.database_id) DataFileSizeMB,
    (select sum(size) from fs where type = 1 and fs.database_id = db.database_id) LogFileSizeMB
from sys.databases db

The steps are:
1. Create a .bat file to run the bcp command using one of the queries mentioned above.
2. Go to D:\ and create a new text document named run_bcp.bat.
3. Open your .bat file using Notepad/Notepad++ and write down the following commands:

rem D:\bcp_outputQuery.csv
bcp "SELECT DB_NAME(database_id) AS DatabaseName,Name AS Logical_Name,Physical_Name, (size*8)/1024 SizeMB FROM sys.master_files where database_id > 4" queryout "D:\bcp_outputQuery.csv" -c -t"," -r"\n" -S <<Server IP>> -T

4. Using a Command Prompt window, go to your .bat file and run it with .\run_bcp.bat. It will start copying the data into your CSV file.
5. Copy the CSV file to your server and load the data using an SSIS package (from the .csv file into a table); a T-SQL alternative is sketched below.
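If you would rather not build the SSIS package from step 5, a minimal T-SQL alternative is to load the CSV with BULK INSERT; dbo.DatabaseSizes is an assumed destination table matching the query's columns.

-- Assumed destination table for the exported CSV
CREATE TABLE dbo.DatabaseSizes
(
    DatabaseName  sysname,
    Logical_Name  sysname,
    Physical_Name nvarchar(260),
    SizeMB        int
);
GO
-- Load the CSV produced by bcp (comma field terminator, newline row terminator)
BULK INSERT dbo.DatabaseSizes
FROM 'D:\bcp_outputQuery.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');
GO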


If you can connect remotely to the server where the data needs to arrive, then it's even easier: you create a login with sufficient rights on the server the data comes from, and then, using SSIS, you'll be able to connect to that server directly.


Friday, September 28, 2012

Task 15: Delete Duplicate records


      Methods to remove duplicate records in SQL Server:

1. When we receive more than one row with the same values, we need to remove every row except the first one, which we can do like below:


/* Delete Duplicate records */
WITH CTE (col1,col2,col3, DuplicateCount) 
AS 
(
SELECT col1,col2,col3, ROW_NUMBER() OVER (PARTITION BY col1,col2,col3 ORDER BY col1) AS DuplicateCount
FROM table_name 
)
DELETE FROM CTE 
WHERE DuplicateCount > 1
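
If you want to see the pattern in action, here is a small self-contained test against a hypothetical #dup table:

-- Hypothetical test table with one duplicated row
CREATE TABLE #dup (col1 int, col2 varchar(10), col3 date);
INSERT INTO #dup VALUES (1,'A','20120101'), (1,'A','20120101'), (2,'B','20120102');

WITH CTE (col1, col2, col3, DuplicateCount)
AS
(
SELECT col1, col2, col3, ROW_NUMBER() OVER (PARTITION BY col1, col2, col3 ORDER BY col1) AS DuplicateCount
FROM #dup
)
DELETE FROM CTE
WHERE DuplicateCount > 1;

SELECT * FROM #dup;   -- only the first copy of each duplicate remains
DROP TABLE #dup;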


Task 14: Pagination



I’m sharing a pagination method that I used in my scripts, based on a Common Table Expression and the ROW_NUMBER function; this seems to be the most elegant way to display a fixed number of rows per page. You can integrate the CTE in a stored procedure or add it as a dynamic script.


DECLARE @PageIndex INT = 1, @Pagesize INT = 10
;With CTEPage
AS
(
SELECT 
ROW_NUMBER() OVER(ORDER BY PageName,PageId) as RW,
PageId, 
PageName 
FROM PaginationTable
)
SELECT PageId, PageName FROM CTEPage
WHERE RW BETWEEN (@PageIndex - 1) * @Pagesize + 1 AND @PageIndex * @Pagesize
GO

The PageSize and PageIndex can also be added as parameters in a stored procedure:

CREATE PROCEDURE [dbo].[Report_pagination]
   @PageSize INT = 10, @Page INT =1
AS
SET NOCOUNT ON;
CREATE TABLE #temp ( ...)

;WITH PageNumbers AS
(
SELECT 
  columns,
          ROW_NUMBER() OVER (ORDER BY col_name) AS rownum

FROM #temp
 LEFT JOIN table_name ON ... -- join condition omitted
)

SELECT  *
FROM    PageNumbers
WHERE   rownum  BETWEEN ((@Page - 1) * @PageSize + 1)
        AND (@Page * @PageSize)
ORDER BY col_name
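
Since the other posts here target SQL Server 2012, the same paging can also be written with OFFSET/FETCH (outside the procedure); PaginationTable and its columns are taken from the first example above.

DECLARE @PageIndex INT = 1, @PageSize INT = 10;

SELECT PageId, PageName
FROM PaginationTable
ORDER BY PageName, PageId
OFFSET (@PageIndex - 1) * @PageSize ROWS   -- skip the previous pages
FETCH NEXT @PageSize ROWS ONLY;            -- return one page of rows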



Sunday, September 9, 2012

Recent books


A lot has changed lately, but here I am:

 
  I read about some interesting books on SQL Server 2012 / BI and I thought it would be nice to share them:


Monday, August 20, 2012

Task 13: OLAP vs OLTP


I have read a lot about the differences between Transaction Processing and Analytical Processing, but today I finally clarified the concepts for myself. The idea is a simple one:

We can divide IT systems into transactional (OLTP) and analytical (OLAP). In general we can assume that OLTP systems provide source data to data warehouses, whereas OLAP systems help to analyze it. 

  • Transaction processing is achieved through the concept called OLTP. 
  • Analytical processing is achieved through the concept called OLAP. 
Online Transaction Processing is driven by operational needs and Online Analytical Processing is driven by strategic needs; collectively, OLAP technologies fall under one concept known as BI. Typical OLTP applications are reservation systems, point of sale, inventory control, automated tellers and manufacturing systems. Their requirements are current data, rapid data updates and high availability, and they usually run on client/server systems (DB2, Oracle, Access, SQL Server) over departmental or small-business data with a normalized design. 
You need to know that OLTP systems are not easy to query, because ad-hoc queries do bad things to a transactional system and its performance, but the problem is solved by the ETL process and OLAP. Data is extracted from various sources, the extracted data is transformed into related information for advanced BI processing, and finally it is loaded into destinations such as tables or materialized views.

These materialized views are database objects that contain the result of a query. Basically, it is a process by which the necessary data is collected from various remote tables and stored in one local area. The original data in the remote system is safe from changes because the materialized views are read-only; they can also be synchronized periodically with the main data, thus keeping them up to date. 
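In SQL Server, the closest equivalent of a materialized view is an indexed view. Here is a minimal sketch with assumed table and column names (dbo.Sales with a NOT NULL Quantity column is an assumption; COUNT_BIG(*) is required when the view uses GROUP BY, and the unique clustered index is what actually materializes the result set).

-- Sketch of an indexed ("materialized") view over an assumed dbo.Sales table
CREATE VIEW dbo.vSalesByProduct
WITH SCHEMABINDING
AS
SELECT ProductID,
       SUM(Quantity) AS TotalQuantity,
       COUNT_BIG(*)  AS RowCnt          -- required for indexed views with GROUP BY
FROM dbo.Sales
GROUP BY ProductID;
GO
-- Materialize the view's result set
CREATE UNIQUE CLUSTERED INDEX IX_vSalesByProduct ON dbo.vSalesByProduct (ProductID);
GO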

OLAP starts from where ETL process ends. It is a group of technologies and applications that collect, manage, process and present multidimensional data for analysis and management purposes, making use of the information from ETL processes output. OLAP effectively makes use of ETL process output which is mostly of materialized views. 

OLTP (On-line Transaction Processing) is characterized by a large number of short on-line transactions (INSERT, UPDATE, DELETE). The main emphasis for OLTP systems is put on very fast query processing, maintaining data integrity in multi-access environments and an effectiveness measured by number of transactions per second. In OLTP database there is detailed and current data, and schema used to store transactional databases is the entity model (usually 3NF). 

OLAP (On-line Analytical Processing) is characterized by relatively low volume of transactions. Queries are often very complex and involve aggregations. For OLAP systems a response time is an effectiveness measure. OLAP applications are widely used by Data Mining techniques. In OLAP database there is aggregated, historical data, stored in multi-dimensional schemas (usually star schema). 



Sunday, August 12, 2012

Task 16: Replication



Replication is the mechanism for creating and maintaining multiple copies of the same data. Replication allows:
• Multiple copies of data to be kept in sync.
• Data to be closer to users, improving performance.
• A branch office to work when not connected to the main office.
• A separate process to offload work from production servers.

What are the Components of Replication?

The following are the important components in a replication setup:
• A publication contains articles.
• The publisher either distributes the publication directly or uses a distributor.
• Subscribers receive the publications to which they have subscribed.

What are the Server Roles Involved in Replication?

SQL Servers can be assigned three different roles in a replication topology:
• Publishers create and modify data. Publishers have data to offer to other servers. Any given replication scheme may have one or more publishers.
• Distributors store replication-specific data associated with one or more publishers.
• A subscriber holds a replica copy of the data. Subscribers are database servers that wish to receive updates from the Publisher when data is modified.

What are the types of replication available in SQL Server?

SQL Server supports three types of replication:

Snapshot replication sends the entire data set to the subscriber. The publisher simply takes a snapshot of the entire replicated database and shares it with the subscribers. You don't use this replication type for databases that change frequently. 
Transactional replication only sends modifications to the data. The replication agent monitors the publisher for changes to the database and transmits those changes to the subscribers.
Merge replication allows the publisher and subscribers to independently modify the data. Both entities can work without an active network connection; when they are reconnected, the merge replication agent checks for changes on both sets of data and modifies each database accordingly.

Here you can find how to configure the data replication in SQL Server 2008
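
For reference, a heavily simplified T-SQL sketch of setting up transactional replication, run on the publisher. It assumes the distributor is already configured and that the Snapshot/Log Reader Agents are added separately; all server, database and object names are placeholders.

USE MyDatabase;
GO
-- Enable the database for publishing
EXEC sp_replicationdboption @dbname = N'MyDatabase', @optname = N'publish', @value = N'true';

-- Create a transactional publication
EXEC sp_addpublication @publication = N'MyPublication',
                       @status = N'active',
                       @repl_freq = N'continuous';

-- Add a table (article) to the publication
EXEC sp_addarticle @publication = N'MyPublication',
                   @article = N'ZipCode',
                   @source_owner = N'dbo',
                   @source_object = N'ZipCode';

-- Add a push subscription on a placeholder subscriber server
EXEC sp_addsubscription @publication = N'MyPublication',
                        @subscriber = N'SUBSCRIBER01',
                        @destination_db = N'MyDatabaseReplica',
                        @subscription_type = N'push';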

You may wish to implement replication in your organization for a number of reasons, such as:
  • Load balancing. Replication allows you to disseminate your data to a number of servers and then distribute the query load among those servers.
  • Offline processing. You may wish to manipulate data from your database on a machine that is not always connected to the network.
  • Redundancy. Replication allows you to build a fail-over database server that’s ready to pick up the processing load at a moment’s notice.

Thursday, August 9, 2012

Task 15: Maintenance Plan



Creating a maintenance plan is pretty easy because there is a wizard and the tasks are already defined.


Example of a maintenance plan created for the project I work on: 




Maintenance plans can be created to perform the following tasks (a T-SQL sketch of a few of them follows the list):
  • Reorganize the data on the data and index pages by rebuilding indexes with a new fill factor. Rebuilding indexes with a new fill factor makes sure that database pages contain an equally distributed amount of data and free space. It also enables faster growth in the future. For more information, see Specify Fill Factor for an Index.
  • Compress data files by removing empty database pages.
  • Update index statistics to make sure the query optimizer has current information about the distribution of data values in the tables. This enables the query optimizer to make better judgments about the best way to access data, because it has more information about the data stored in the database. Although index statistics are automatically updated by SQL Server periodically, this option can force the statistics to update immediately.
  • Perform internal consistency checks of the data and data pages within the database to make sure that a system or software problem has not damaged data.
  • Back up the database and transaction log files. Database and log backups can be retained for a specified period. This lets you create a history of backups to be used if you have to restore the database to a time earlier than the last database backup. You can also perform differential backups.
  • Run SQL Server Agent jobs. This can be used to create jobs that perform a variety of actions and the maintenance plans to run those jobs.
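
Under the covers these wizard tasks map to ordinary T-SQL. A minimal sketch of a few of them, with assumed database, table, index and path names:

USE MyDatabase;
GO
-- Rebuild an index with a new fill factor
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD WITH (FILLFACTOR = 80);
GO
-- Update statistics so the optimizer has current distribution information
EXEC sp_updatestats;
GO
-- Internal consistency checks of data and data pages
DBCC CHECKDB (N'MyDatabase') WITH NO_INFOMSGS;
GO
-- Back up the database and its transaction log
BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase.bak';
BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase.trn';
GO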

Task 14: Indexes



   Here you can find a link that provides you with detailed information about indexes:

Index Overview, Index Architecture, Clustered Indexes, Nonclustered Indexes, Covering Indexes, How Indexes Improve Performance, Index Uniqueness, Data Pages & Extents, How Indexes Are Stored, The Relationship Between Index Size and Page Size, Other Things Stored in the Index, Retrieving Database Page Metadata, Index Statistics, Best Practices on How to Design Database Indexes, Maintaining Indexes for Top Performance

Index fragmentation is a subject that I would like to cover next. 

The SQL Server Database Engine automatically maintains indexes whenever insert, update, or delete operations are made to the underlying data. Over time these modifications can cause the information in the index to become scattered in the database (fragmented). Fragmentation exists when indexes have pages in which the logical ordering, based on the key value, does not match the physical ordering inside the data file. Heavily fragmented indexes can degrade query performance and cause your application to respond slowly.

Detecting Fragmentation

The first step in deciding which defragmentation method to use is to analyze the index to determine the degree of fragmentation. By using the system function sys.dm_db_index_physical_stats, you can detect fragmentation in a specific index, all indexes on a table or indexed view, all indexes in a database, or all indexes in all databases. For partitioned indexes, sys.dm_db_index_physical_stats also provides fragmentation information for each partition.
The result set returned by the sys.dm_db_index_physical_stats function includes the following columns.

avg_fragmentation_in_percent > 5% and <= 30%: ALTER INDEX REORGANIZE

avg_fragmentation_in_percent > 30%: ALTER INDEX REBUILD WITH (ONLINE = ON) 



USE AdventureWorks2012;
GO
-- Find the average fragmentation percentage of all indexes
-- in the HumanResources.Employee table.
SELECT a.index_id, name, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID(N'AdventureWorks2012'), OBJECT_ID(N'HumanResources.Employee'), NULL, NULL, NULL) AS a
JOIN sys.indexes AS b ON a.object_id = b.object_id AND a.index_id = b.index_id;
GO
-- Reorganize the IX_Employee_OrganizationalLevel_OrganizationalNode index on the HumanResources.Employee table.
ALTER INDEX IX_Employee_OrganizationalLevel_OrganizationalNode ON HumanResources.Employee
REORGANIZE ;
GO
-- Reorganize all indexes on the HumanResources.Employee table.
ALTER INDEX ALL ON HumanResources.Employee
REORGANIZE ;
GO
ALTER INDEX PK_Employee_BusinessEntityID ON HumanResources.Employee
REBUILD;
GO


B-tree structure of a SQL Server index




Wednesday, August 8, 2012

Task 12: Report in Power View



I found a cool SharePoint tutorial that teaches you how to create a sample report in Power View:
http://technet.microsoft.com/en-us/library/hh759325%28v=sql.110%29.aspx

First of all I got many access problems with Sharepoint site:
1. Trying to create a new folder in Shared Documents I got the following error:

2. Under New Document I could not find the Report Data Source option. (If there is no Report Data Source option on the New Document tab, then someone with adequate permissions on the
SharePoint site needs to add that content type to the site. For more information, see Add Report Server Content Types to a Library (Reporting Services in SharePoint Integrated Mode).) 


The issues were resolved by our Configuration Engineer.
 You can use this tutorial any time you need to create a simple Power View report, using as the data source an Excel file saved to a SharePoint document library or the PowerPivot Gallery. 

Task 11: Administering Microsoft SQL Server 2012 Databases certification


I passed the Querying Microsoft SQL Server 2012 certification two days ago and I still dream about all 13 questions I didn't answer correctly or completely. I'll continue with the next 2 certifications:


For 70-462 there is a Training Kit available for free download.

Friday, July 27, 2012

Task 10: Halloween Problem


Have you heard about the Halloween Problem?

In computing, the Halloween Problem refers to a phenomenon in databases in which an update operation causes a change in the physical location of a row, potentially allowing the row to be visited more than once during the operation. This could even cause an infinite loop in some cases where updates continually place the updated record ahead of the scan performing the update operation.

Please have a look at the following example to understand the meaning of this problem:

-------------------------------------------------------------------------------
--- Halloween problem script ---
-------------------------------------------------------------------------------
CREATE TABLE halloweenProblem 
(
     Pk int NOT NULL PRIMARY KEY, 
     Name varchar(20), 
     Salary money
)
insert into halloweenProblem(pk, Name, Salary) values(1, 'Joe Blow', 20000.00)
insert into halloweenProblem(pk, Name, Salary) values(2, 'Joe Smith', 30000.00)
insert into halloweenProblem(pk, Name, Salary) values(3, 'Jane Doe', 40000.00)
insert into halloweenProblem(pk, Name, Salary) values(4, 'Boss Man', 50000.00)

DECLARE RaiseSalary CURSOR
FOR 
            SELECT salary, pk 
            FROM halloweenProblem 
            WHERE salary < 50000 
            ORDER BY pk
FOR UPDATE  OF pk

OPEN RaiseSalary 
FETCH RaiseSalary
             WHILE ( @@fetch_status =0)
             BEGIN

             UPDATE halloweenProblem 
             SET pk=pk*10  
             WHERE CURRENT OF RaiseSalary

             SELECT * FROM halloweenProblem 
             FETCH RaiseSalary
             END
CLOSE RaiseSalary 
DEALLOCATE RaiseSalary 


DECLARE abc CURSOR  FOR
             SELECT salary
             FROM halloweenProblem 
             WHERE salary < 50000 
             ORDER BY salary
FOR UPDATE OF salary

OPEN abc
GO

FETCH NEXT FROM abc
GO

UPDATE halloweenProblem  
SET salary= salary*2
WHERE CURRENT OF abc
GO
CLOSE abc
DEALLOCATE abc
------------------------------------------------------------------------------