Archive
SQL query to truncate all tables in a database in SQL Server/SQL Azure
How to delete or truncate all the data from all the tables in a database
When cleaning up a database or switching to a new one, we may need to truncate or delete all tables in the database. Selecting and deleting the tables one at a time is not practical because a database can contain any number of tables. Instead, we can read the list of tables from the system catalog (sys.sysobjects) and apply a DELETE or TRUNCATE statement to each table from a single cursor.
SQL script to delete/truncate all tables in a database in SQL Server/SQL Azure
The following script truncates all tables in the selected database. If you want to delete the data instead, replace the TRUNCATE statement with a DELETE statement.
DECLARE @GetObjectName AS CURSOR
DECLARE @StringVal AS nvarchar(max)
DECLARE @DBObjName AS nvarchar(100)
DECLARE @Type AS varchar(2)

SET @GetObjectName = CURSOR FOR
    SELECT type, Name FROM sys.sysobjects WHERE type IN ('U')

OPEN @GetObjectName
FETCH NEXT FROM @GetObjectName INTO @Type, @DBObjName
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @StringVal = 'truncate table ' + @DBObjName
    EXEC (@StringVal)
    FETCH NEXT FROM @GetObjectName INTO @Type, @DBObjName
END
CLOSE @GetObjectName
DEALLOCATE @GetObjectName
How to drop all stored procedures (sps) and functions from a database
The following script drops all stored procedures and user-defined functions from a database.
DECLARE @GetObjectName AS CURSOR
DECLARE @StringVal AS nvarchar(max)
DECLARE @DBObjName AS nvarchar(100)
DECLARE @Type AS varchar(2)

SET @GetObjectName = CURSOR FOR
    SELECT type, Name FROM sys.sysobjects
    WHERE type IN ('P', 'FN', 'TF')
      AND Name NOT IN ('DBM_EnableDisableAllTableConstraints')

OPEN @GetObjectName
FETCH NEXT FROM @GetObjectName INTO @Type, @DBObjName
WHILE @@FETCH_STATUS = 0
BEGIN
    IF @Type = 'P'
    BEGIN
        SET @StringVal = 'Drop Procedure ' + @DBObjName
        EXEC (@StringVal)
    END
    ELSE IF @Type = 'FN' OR @Type = 'TF'
    BEGIN
        SET @StringVal = 'Drop Function ' + @DBObjName
        EXEC (@StringVal)
    END
    FETCH NEXT FROM @GetObjectName INTO @Type, @DBObjName
END
CLOSE @GetObjectName
DEALLOCATE @GetObjectName
How to find the size and cost of a SQL Azure database
SQL query to find the cost and size of a SQL Azure database
SQL Azure is billed according to how much of the database we use, so we should have a clear idea of how much data is stored in our SQL Azure database in order to use it cost-effectively. That means regularly checking the size and cost of the database.
Query to find the size and cost of the entire SQL Azure database
The following query returns the size and cost of the SQL Azure database. The latest Azure pricing can be seen at http://www.microsoft.com/windowsazure/pricing/. The query calculates the size and price of the database according to that pricing:
DECLARE @SizeInBytes bigint
SELECT @SizeInBytes = (SUM(reserved_page_count) * 8192)
FROM sys.dm_db_partition_stats

DECLARE @Edition sql_variant
SELECT @Edition = DATABASEPROPERTYEX(DB_Name(), 'Edition')

SELECT (CASE
            WHEN @SizeInBytes/1073741824.0 < 1  THEN (CASE @Edition WHEN 'Web' THEN 9.99 ELSE 99.99 END)
            WHEN @SizeInBytes/1073741824.0 < 5  THEN (CASE @Edition WHEN 'Web' THEN 49.95 ELSE 99.99 END)
            WHEN @SizeInBytes/1073741824.0 < 10 THEN 99.99
            WHEN @SizeInBytes/1073741824.0 < 20 THEN 199.98
            WHEN @SizeInBytes/1073741824.0 < 30 THEN 299.97
            WHEN @SizeInBytes/1073741824.0 < 40 THEN 399.96
            WHEN @SizeInBytes/1073741824.0 < 50 THEN 499.95
        END) / @SizeInBytes
Query to find the size and cost of the SQL Azure database by index
The following query returns the size and cost of the SQL Azure database's indexes.
DECLARE @SizeInBytes bigint
SELECT @SizeInBytes = (SUM(reserved_page_count) * 8192)
FROM sys.dm_db_partition_stats

DECLARE @Edition sql_variant
SELECT @Edition = DATABASEPROPERTYEX(DB_Name(), 'Edition')

DECLARE @CostPerByte float
SELECT @CostPerByte =
       (CASE
            WHEN @SizeInBytes/1073741824.0 < 1  THEN (CASE @Edition WHEN 'Web' THEN 9.99 ELSE 99.99 END)
            WHEN @SizeInBytes/1073741824.0 < 5  THEN (CASE @Edition WHEN 'Web' THEN 49.95 ELSE 99.99 END)
            WHEN @SizeInBytes/1073741824.0 < 10 THEN 99.99
            WHEN @SizeInBytes/1073741824.0 < 20 THEN 199.98
            WHEN @SizeInBytes/1073741824.0 < 30 THEN 299.97
            WHEN @SizeInBytes/1073741824.0 < 40 THEN 399.96
            WHEN @SizeInBytes/1073741824.0 < 50 THEN 499.95
        END) / @SizeInBytes

SELECT idx.name,
       SUM(reserved_page_count) * 8192 'bytes',
       (SUM(reserved_page_count) * 8192) * @CostPerByte 'cost'
FROM sys.dm_db_partition_stats AS ps
INNER JOIN sys.indexes AS idx
        ON idx.object_id = ps.object_id
       AND idx.index_id = ps.index_id
WHERE type_desc = 'NONCLUSTERED'
GROUP BY idx.name
ORDER BY 3 DESC
Query to find the size and cost of each table in the SQL Azure database
The following query returns the monthly cost per row for every table in the database.

DECLARE @SizeInBytes bigint
SELECT @SizeInBytes = (SUM(reserved_page_count) * 8192)
FROM sys.dm_db_partition_stats

DECLARE @Edition sql_variant
SELECT @Edition = DATABASEPROPERTYEX(DB_Name(), 'Edition')

DECLARE @CostPerByte float
SELECT @CostPerByte =
       (CASE
            WHEN @SizeInBytes/1073741824.0 < 1  THEN (CASE @Edition WHEN 'Web' THEN 9.99 ELSE 99.99 END)
            WHEN @SizeInBytes/1073741824.0 < 5  THEN (CASE @Edition WHEN 'Web' THEN 49.95 ELSE 99.99 END)
            WHEN @SizeInBytes/1073741824.0 < 10 THEN 99.99
            WHEN @SizeInBytes/1073741824.0 < 20 THEN 199.98
            WHEN @SizeInBytes/1073741824.0 < 30 THEN 299.97
            WHEN @SizeInBytes/1073741824.0 < 40 THEN 399.96
            WHEN @SizeInBytes/1073741824.0 < 50 THEN 499.95
        END) / @SizeInBytes

SELECT sys.objects.name,
       SUM(reserved_page_count) * 8192 'Bytes',
       row_count 'Row Count',
       (CASE row_count WHEN 0 THEN 0
            ELSE (SUM(reserved_page_count) * 8192) / row_count END) 'Bytes Per Row',
       (CASE row_count WHEN 0 THEN 0
            ELSE ((SUM(reserved_page_count) * 8192) / row_count) * @CostPerByte END) 'Monthly Cost Per Row'
FROM sys.dm_db_partition_stats, sys.objects
WHERE sys.dm_db_partition_stats.object_id = sys.objects.object_id
GROUP BY sys.objects.name, row_count
How to upload a large file/blob to Azure storage using ASP.Net, C#
A tool to upload local files into Azure storage.
In a previous post we demonstrated a simple C# application that uploads files from a local folder into Azure storage and writes an error log for each file it processes. That approach works fine up to a certain file size (we tested with 1 GB). But when we tried to upload files larger than 3 GB with the same code in one of our live projects, the upload failed partway through. So we looked for another way to move very large files into an Azure storage blob.
How to upload large files into Azure storage using C#, ASP.Net
When we try to upload very large local files into an Azure storage blob, we get timeout errors. To resolve this, we split the large file into small packages, transfer them, and reassemble the file once the transfer has completed successfully. This parallel transfer can be achieved with the following code, which lets us handle very large files in an ASP.Net, C# web application.
public void ParallelDownloadToFile(CloudBlockBlob blob, string fileName, int maxBlockSize)
{
    try
    {
        // refresh the values
        blob.FetchAttributes();
        long fileSize = blob.Attributes.Properties.Length;
        var filePath = Path.GetDirectoryName(fileName);
        var fileNameWithoutPath = Path.GetFileNameWithoutExtension(fileName);

        // let's figure out how big the file is here
        long leftToRead = fileSize;
        int startPosition = 0;

        // have 1 block for every maxBlockSize bytes plus 1 for the remainder
        var blockCount = ((int)Math.Floor((double)(fileSize / maxBlockSize))) + 1;

        // setup the control array
        BlockTransferDetail[] transferDetails = new BlockTransferDetail[blockCount];

        // create an array of block keys
        string[] blockKeys = new string[blockCount];
        var blockIds = new List<string>();

        // populate the control array...
        for (int j = 0; j < transferDetails.Length; j++)
        {
            int toRead = (int)(maxBlockSize < leftToRead ? maxBlockSize : leftToRead);
            string blockId = Path.Combine(filePath,
                string.Format("{0}_{1}.dat", fileNameWithoutPath, j.ToString("00000000000")));

            if (startPosition < 0)
                startPosition = startPosition * -1;
            if (toRead < 0)
                toRead = toRead * -1;

            transferDetails[j] = new BlockTransferDetail()
            {
                StartPosition = startPosition,
                BytesToRead = toRead,
                BlockId = blockId
            };

            if (toRead > 0)
            {
                blockIds.Add(blockId);
            }

            // increment the starting position
            startPosition += toRead;
            leftToRead -= toRead;
        }

        // now we do a || download of the file.
        var result = Parallel.For(0, transferDetails.Length, j =>
        {
            // get the blob as a stream
            try
            {
                using (BlobStream stream = blob.OpenRead())
                {
                    Thread.Sleep(10000);
                    stream.Seek(transferDetails[j].StartPosition, SeekOrigin.Begin);

                    // setup a buffer with the proper size
                    byte[] buff = new byte[transferDetails[j].BytesToRead];

                    // read into the buffer
                    stream.Read(buff, 0, transferDetails[j].BytesToRead);

                    using (Stream fileStream = new FileStream(transferDetails[j].BlockId,
                        FileMode.Create, FileAccess.Write, FileShare.None))
                    {
                        using (BinaryWriter bw = new BinaryWriter(fileStream))
                        {
                            bw.Write(buff);
                            bw.Close();
                        }
                    }
                    buff = null;
                }
            }
            catch (Exception ex)
            {
                throw;
            }
        });

        // assemble the file into one now...
        using (Stream fileStream = new FileStream(fileName, FileMode.Append,
            FileAccess.Write, FileShare.None))
        {
            using (BinaryWriter bw = new BinaryWriter(fileStream))
            {
                // loop through each of the files on the disk
                for (int j = 0; j < transferDetails.Length; j++)
                {
                    // read them into the file (append)
                    bw.Write(File.ReadAllBytes(transferDetails[j].BlockId));

                    // and then delete them
                    File.Delete(transferDetails[j].BlockId);
                }
            }
        }
        transferDetails = null;
    }
    catch (Exception ex)
    {
        throw;
    }
}

public static class BlobExtensions
{
    static readonly string DFDriveEnvVarName = "AZURE_DRIVE_DEV_PATH";
    static readonly string containername =
        RoleEnvironment.GetConfigurationSettingValue("Container").ToLowerInvariant();

    public static bool Exists(this CloudBlob blob, string folderName)
    {
        try
        {
            // test doesn't work in df
            if (RoleEnvironment.DeploymentId.ToLowerInvariant().StartsWith("deployment("))
            {
                string path = Environment.GetEnvironmentVariable(DFDriveEnvVarName);
                path += "\\devstoreaccount1\\";
                path += containername;
                path += "\\";
                path += folderName;
                //path += blob.Uri.Segments.Last();
                if (Directory.Exists(path))
                    return true;
                else
                    return false;
            }
            else
            {
                blob.FetchAttributes();
                return true;
            }
        }
        catch (StorageClientException e)
        {
            if (e.ErrorCode == StorageErrorCode.ResourceNotFound)
            {
                return false;
            }
            else
            {
                throw;
            }
        }
    }
}
By using the above code we can handle very large files between a local folder and Azure blob storage without timeouts. The function splits the file stream into separate byte packets and processes these small pieces one block at a time, so the timeout issue does not occur. The function is working well and is implemented in one of our projects: we have verified it with files larger than 2 GB in our ASP.Net MVC application, and it should work for even larger files as well.
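For the upload direction specifically, the same split-into-blocks idea can be sketched with the storage client's block API (PutBlock/PutBlockList). This is a minimal sketch, not the code from our project: the helper class name, container name and block size below are assumptions for illustration.

// Hedged sketch: chunked upload of a large local file using PutBlock/PutBlockList.
// Assumes the Microsoft.WindowsAzure.StorageClient (1.x) SDK used elsewhere in this post;
// the class name, container and block size are illustrative values only.
using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class LargeBlobUploader
{
    public static void UploadInBlocks(string connectionString, string containerName,
                                      string localFile, int blockSizeInBytes)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference(containerName);
        container.CreateIfNotExist();

        CloudBlockBlob blob = container.GetBlockBlobReference(Path.GetFileName(localFile));
        var blockIds = new List<string>();

        using (FileStream fs = File.OpenRead(localFile))
        {
            byte[] buffer = new byte[blockSizeInBytes];
            int read;
            int blockNumber = 0;

            while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Block ids must be base64 strings of equal length.
                string blockId = Convert.ToBase64String(BitConverter.GetBytes(blockNumber++));
                blockIds.Add(blockId);

                using (var ms = new MemoryStream(buffer, 0, read))
                {
                    blob.PutBlock(blockId, ms, null);   // upload one small piece
                }
            }
        }

        // Commit the block list so the uploaded pieces become a single blob.
        blob.PutBlockList(blockIds);
    }
}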
How to sync a large database with the sync framework
Using Sync Framework 2.1 we can sync databases regularly by scheduling the sync at a chosen interval. In one of our projects the client needed to be able to schedule the sync themselves, and we implemented this with Sync Framework 2.1 in an ASP.Net MVC application. You can go through the main sync code here.
It worked fine until we tested with a large database (greater than 10 GB). With a database of that size, the sync failed partway through.
The error was "There is not enough space on the disk". We increased the size of the target database to 20 GB (the source database was 10 GB) but got the same error. Searching on Google did not turn up much help for the issue. After digging into it for a few days we found the root cause, and it was not related to the sync framework at all. While a sync is running, a log file is written in the background to "C:\Resources\temp\7aad4070ce51495c82cde6b1d410aa65.WorkerRole1\RoleTemp" on the worker role's virtual machine (by WaHostBootstrapper). This log file grows continuously and has a size limit (normally 60 MB). Syncing a large database takes a long time, so the log file keeps growing and the sync crashes once the maximum size is exceeded. In the end we found a solution, and now our application can sync a large database without any error. The solution is given below.
1. We need to extend the maximum size of the log file. We can do this with the following entry in "ServiceDefinition.csdef":
<LocalResources>
  <LocalStorage name="localStoreOne" sizeInMB="20480" cleanOnRoleRecycle="true" />
</LocalResources>
name (localStoreOne): the name of your local storage resource.
sizeInMB: the maximum size you want to allow for the log file.
cleanOnRoleRecycle: when set to true, the log file is deleted and recreated each time the worker role recycles.
2. In the OnStart() method of the worker role, map the temp folder to this local resource using the following code.
string customTempLocalResourcePath =
    RoleEnvironment.GetLocalResource("localStoreOne").RootPath;
Environment.SetEnvironmentVariable("TMP", customTempLocalResourcePath);
Environment.SetEnvironmentVariable("TEMP", customTempLocalResourcePath);
With this in place the log file is archived partway through the sync once it reaches a certain size, and it can grow up to the size specified in the ServiceDefinition file. It is better to set cleanOnRoleRecycle to true, so that the log files are automatically deleted and recreated whenever the worker role restarts.
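For context, this is roughly where the mapping sits in the worker role, assuming the standard WorkerRole template; only the two environment-variable lines come from the snippet above, the rest is ordinary boilerplate.

// Hedged sketch: a worker role OnStart() that redirects TMP/TEMP to the
// local storage resource declared in ServiceDefinition.csdef.
using System;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Map the temp folder to the larger local resource so the sync
        // log can grow beyond the default limit.
        string customTempLocalResourcePath =
            RoleEnvironment.GetLocalResource("localStoreOne").RootPath;
        Environment.SetEnvironmentVariable("TMP", customTempLocalResourcePath);
        Environment.SetEnvironmentVariable("TEMP", customTempLocalResourcePath);

        return base.OnStart();
    }
}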
Compress and extract files using GZipStream
I had to download large blob files from the Azure server to a local folder. Downloading the file itself was easy, but files larger than about 2 GB failed partway through the download. When I hit this issue I searched Google and could not quickly find a feasible solution, which is why I am posting the compress and decompress code here.
using System.IO.Compression;
Compress files ….
public static void Compress(FileInfo fi)
{
    // Get the stream of the source file.
    using (FileStream inFile = fi.OpenRead())
    {
        // Prevent compressing hidden and
        // already compressed files.
        if ((File.GetAttributes(fi.FullName)
            & FileAttributes.Hidden) != FileAttributes.Hidden
            && fi.Extension != ".gz")
        {
            // Create the compressed file.
            using (FileStream outFile =
                File.Create(fi.FullName + ".gz"))
            {
                using (GZipStream Compress =
                    new GZipStream(outFile,
                    CompressionMode.Compress))
                {
                    // Copy the source file into
                    // the compression stream.
                    inFile.CopyTo(Compress);
                    Console.WriteLine("Compressed {0} from {1} to {2} bytes.",
                        fi.Name, fi.Length.ToString(), outFile.Length.ToString());
                }
            }
        }
    }
}
Decompress compressed file..
public void Decompress(FileInfo fi)
{
    // Get the stream of the source file.
    using (FileStream inFile = fi.OpenRead())
    {
        // Get the original file name, for example
        // "report.doc" from "report.doc.gz".
        string origName = fi.FullName.Remove(fi.FullName.Length -
            fi.Extension.Length);

        // Create the decompressed file.
        using (FileStream outFile = File.Create(origName))
        {
            using (GZipStream Decompress = new GZipStream(inFile,
                CompressionMode.Decompress))
            {
                // Copy the decompression stream
                // into the output file.
                Decompress.CopyTo(outFile);
                Console.WriteLine("Decompressed: {0}", fi.Name);
            }
        }
    }
}
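A quick usage sketch tying the two methods together; the helper class name (GZipHelper) and the file path are assumptions for illustration, and Decompress is called on an instance because it is declared as an instance method above.

// Hedged usage sketch: compress a large file before transferring it and
// decompress it again afterwards. Class name and path are illustrative only.
using System.IO;

class Program
{
    static void Main()
    {
        var original = new FileInfo(@"C:\data\largefile.bak");      // assumed path
        GZipHelper.Compress(original);                               // writes largefile.bak.gz

        var compressed = new FileInfo(original.FullName + ".gz");
        new GZipHelper().Decompress(compressed);                     // restores largefile.bak
    }
}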
How to sync a database after schema changes using the sync framework
I was working on Azure database synchronization for the last month and completed the task successfully. In my case both the source and the target databases are in Azure; the same approach also works for syncing from Azure to a local database. Below is the code for syncing two Azure databases.
Step-by-step process for syncing databases using the sync framework:
1. Create connection strings for both the source db and the target db.
2. Create a DbSyncScopeDescription object 'myScope' describing our scope.
3. Add the tables that we need to sync to 'myScope'.
4. Provision both the source db and the target db.
5. Set the memory allocation for the database providers.
6. Set the application transaction size on the destination provider.
7. Create a SyncOrchestrator object and call its Synchronize() method.
public static void Setup(string sqlSourceConnectionString, string sqlTargetConnectionString,
    string scopeName, int scopeid, Scope ScopeDetails)
{
    try
    {
        SqlConnection sqlServerConn = new SqlConnection(sqlSourceConnectionString);
        SqlConnection sqlAzureConn = new SqlConnection(sqlTargetConnectionString);

        DbSyncScopeDescription myScope = new DbSyncScopeDescription(scopeName);

        string connectionstring = ConfigurationSettings.AppSettings["ConnectionString"];
        SqlDataReader myDataReader = null;
        SqlConnection conn = new SqlConnection(connectionstring);

        // Adding tables that we need to sync from the source db
        SqlCommand cmd = new SqlCommand(
            "Select TableName from scheduler_tables where scopeid =" + scopeid, conn);
        conn.Open();
        myDataReader = cmd.ExecuteReader();
        while (myDataReader.Read())
        {
            DbSyncTableDescription TestSchema1 =
                SqlSyncDescriptionBuilder.GetDescriptionForTable(
                    Convert.ToString(myDataReader["TableName"]), sqlServerConn);

            // Add the tables from above to the scope
            myScope.Tables.Add(TestSchema1);
        }

        // Setup SQL Server for sync
        SqlSyncScopeProvisioning sqlServerProv =
            new SqlSyncScopeProvisioning(sqlServerConn, myScope);
        sqlServerProv.CommandTimeout = 60 * 30;
        if (!sqlServerProv.ScopeExists(scopeName))
        {
            // Apply the scope provisioning.
            sqlServerProv.Apply();
        }

        // Setup SQL Azure for sync
        SqlSyncScopeProvisioning sqlAzureProv =
            new SqlSyncScopeProvisioning(sqlAzureConn, myScope);
        sqlAzureProv.CommandTimeout = 60 * 30;
        if (!sqlAzureProv.ScopeExists(scopeName))
        {
            sqlAzureProv.Apply();
        }

        sqlAzureConn.Close();
        sqlServerConn.Close();

        Sync(sqlSourceConnectionString, sqlTargetConnectionString, ScopeDetails);
    }
    catch (Exception ex)
    {
        throw;
    }
}

public static void Sync(string sqlSourceConnectionString, string sqlTargetConnectionString,
    Scope ScopeDetails)
{
    try
    {
        SqlConnection sqlServerConn = new SqlConnection(sqlSourceConnectionString);
        SqlConnection sqlAzureConn = new SqlConnection(sqlTargetConnectionString);

        SqlSyncProvider RemoteProvider = new SqlSyncProvider(ScopeDetails.ScopeName, sqlAzureConn);
        SqlSyncProvider LocalProvider = new SqlSyncProvider(ScopeDetails.ScopeName, sqlServerConn);

        // Set memory allocation to the database providers
        RemoteProvider.MemoryDataCacheSize = MemorySize;
        LocalProvider.MemoryDataCacheSize = MemorySize;

        // Set application transaction size on destination provider.
        RemoteProvider.ApplicationTransactionSize = BatchSize;

        // Count transactions
        RemoteProvider.ChangesApplied +=
            new EventHandler<DbChangesAppliedEventArgs>(RemoteProvider_ChangesApplied);

        SyncOrchestrator orch = new SyncOrchestrator();
        orch.RemoteProvider = RemoteProvider;
        orch.LocalProvider = LocalProvider;
        orch.Direction = SyncDirectionOrder.Upload;

        String syncdetails;
        syncdetails = ShowStatistics(orch.Synchronize());

        sqlAzureConn.Close();
        sqlServerConn.Close();
    }
    catch (Exception ex)
    {
        throw;
    }
}

public static string ShowStatistics(SyncOperationStatistics syncStats)
{
    string message;
    syncStartTime = syncStats.SyncStartTime.ToString();
    message = "\tSync Start Time :" + syncStats.SyncStartTime.ToString();
    syncEndTime = syncStats.SyncEndTime.ToString();
    message = message + "\tSync End Time :" + syncStats.SyncEndTime.ToString();
    message = message + "\tUpload Changes Applied :" + syncStats.UploadChangesApplied.ToString();
    message = message + "\tUpload Changes Failed :" + syncStats.UploadChangesFailed.ToString();
    message = message + "\tUpload Changes Total :" + syncStats.UploadChangesTotal.ToString();
    message = message + "\tDownload Changes Applied :" + syncStats.DownloadChangesApplied.ToString();
    message = message + "\tDownload Changes Failed :" + syncStats.DownloadChangesFailed.ToString();
    message = message + "\tDownload Changes Total :" + syncStats.DownloadChangesTotal.ToString();
    return message;
}
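The Sync method above subscribes to the provider's ChangesApplied event, but the handler itself is not shown in the post. A minimal placeholder, assuming it lives in the same class and only counts the applied transactions as the comment suggests, could look like this:

// Hedged sketch: a minimal ChangesApplied handler matching the subscription above.
// Requires: using Microsoft.Synchronization.Data;
static int transactionCount = 0;

static void RemoteProvider_ChangesApplied(object sender, DbChangesAppliedEventArgs e)
{
    // Count the batches of changes applied on the destination provider.
    transactionCount++;
}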
The above code works fine until a schema change is made in the source database; any schema change on the source causes the sync to fail.
In my case, I added a new column called 'isActive' to the source db, and the next sync failed with the error "invalid column name 'isActive'".
The cause is that the first sync creates a number of tables, stored procedures and triggers that the sync framework uses to track which areas need to be synced in future and which areas have changed since the previous sync.
We need to clear all of this sync metadata before the framework will pick up any schema change in the source db. To delete these records Microsoft provides the SqlSyncScopeDeprovisioning class with its DeprovisionStore() method, so whenever the source database schema changes we create an object of this class and call the method before applying the provisioning again. The code to deprovision the entire database is below:
SqlSyncScopeDeprovisioning deprovisioningvar = new SqlSyncScopeDeprovisioning(sqlServerConn);
deprovisioningvar.DeprovisionStore();
After that it works fine! The trade-off is that deprovisioning clears all the data about previous syncs, so the next sync takes as long as the very first one; from then on there is no need to deprovision again until another schema change occurs.
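A minimal sketch of the overall order after a schema change, assuming this helper sits in the same class as the Setup method shown earlier and that both databases are deprovisioned; the names here are illustrative, not from the original project.

// Hedged sketch: deprovision both databases after a schema change, then
// re-run the provisioning and sync with the new schema.
// Requires: using System.Data.SqlClient; using Microsoft.Synchronization.Data.SqlServer;
static void ReprovisionAfterSchemaChange(string sourceConnStr, string targetConnStr,
                                         string scopeName, int scopeId, Scope scopeDetails)
{
    using (var sourceConn = new SqlConnection(sourceConnStr))
    using (var targetConn = new SqlConnection(targetConnStr))
    {
        // Remove the sync metadata (tracking tables, triggers, stored procedures)
        // so the new schema is picked up on the next provisioning.
        new SqlSyncScopeDeprovisioning(sourceConn).DeprovisionStore();
        new SqlSyncScopeDeprovisioning(targetConn).DeprovisionStore();
    }

    // Provision again and sync; this first sync after deprovisioning is a full one.
    Setup(sourceConnStr, targetConnStr, scopeName, scopeId, scopeDetails);
}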