| Hello My Expert DBA, Monday morning I was in my cube, enjoying my morning tea and reviewing my daily email. As a DBA, you know our daily responsibilities. LoL... Suddenly, one of our esteemed application owners knocked on my cubicle and whispered, "Zahid, my application is down!!!!!" I am familiar with his application database, the Orion monitoring tool, and yes, it is a pretty large database, close to 2 TB in size! I got onto the box and ran sp_readerrorlog to see if it offered any insight, and discovered that the server had been rebooted. Then I looked through the Windows event log to see why the server had rebooted, and found that a Windows Update had run over the weekend, which I found interesting enough to dig into. The app owner was aware of the update; however, the DBA team was not, because the SCCM administrator applies all updates and patches/CUs. Sometimes this matters: if it is production, those exact updates should be applied to the Test box before the Prod box. It is also critical for all of us to be aware of any updates. Because this is a huge database, I would propose notifying the DBA team first, so that a DBA can properly stop the SQL Server service and prevent data/log file corruption. Let me talk about what happened on my database server...
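For reference, this is roughly how I search the error log for shutdown and startup entries (the search strings here are just examples):

-- Read the current SQL Server error log (log 0, type 1 = SQL error log), filtered by a search string.
EXEC sp_readerrorlog 0, 1, N'shutdown';
EXEC sp_readerrorlog 0, 1, N'SQL Server is starting';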
Remember that if your database is in Recovery mode, you can't do anything.
Continue......... |
| Hello DBAs, I just experienced a significant incident involving a database restore failure. I had chosen to restore a database in order to test DR (backup and restore). As a "scary DBA," I believe we should perform random database restore tests on our Test server before any disaster strikes our actual Production server, so that we can be certain our backups are both valid and restorable. My "Test" database is 575 GB and contains memory-optimized tables, and troubleshooting a database with memory-optimized tables was a relatively new problem for me. Beginning with SQL Server 2014, a memory-optimized table is essentially a table with two copies: an active copy in memory and a durable copy on disk, which holds either the data and structure or the structure only, depending on the table's durability setting. Because memory is flushed when the SQL Server service restarts, SQL Server preserves a physical copy of the table on disk from which it can be recovered.
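For context, here is a minimal sketch of what a durable memory-optimized table definition looks like; the table, columns, and bucket count are hypothetical, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup:

-- A durable memory-optimized table (SQL Server 2014+): rows live in memory,
-- with a durable copy persisted on disk so the table can be rebuilt after a restart.
CREATE TABLE dbo.SessionCache
(
    SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload   NVARCHAR(4000) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);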
Issue: restoration of the "XYZ" database failed on our Test Server with the following errors:
Msg 41379, Level 16, State 0, Line 0
Restore operation failed for database 'XYZ' due to insufficient memory in the resource pool 'default'. Close other applications to increase the available memory, ensure that both SQL Server memory configuration and resource pool memory limits are correct and try again. See 'http://go.microsoft.com/fwlink/?LinkID=507574' for more information.
Msg 3456, Level 16, State 1, Line 2
Could not redo log record (310751:4331:4), for transaction ID (2:-741982062), on page (1:229325), allocation unit 72057637921488896, database 'XYZ' (database ID 7). Page: LSN = (310735:185062:258), allocation unit = 72057637921488896, type = 1. Log: OpCode = 2, context 2, PrevPageLSN: (310749:76109:7). Restore from a backup of the database, or repair the database.
Location: "e:\\b\\s3\\sources\\sql\\ntdbms\\hekaton\\engine\\core\\database.cpp":2663 Expression: db->TransactionMap->ComputeCount() == 0
SPID: 51
Process ID: 7392
Restore was successful but deferred transactions remain. These transactions cannot be resolved because there are data that is unavailable. Either use RESTORE to make that data available or drop the filegroups if you never need this data again. Dropping the filegroup results in a defunct filegroup.
RESTORE WITH CONTINUE_AFTER_ERROR was successful but some damage was encountered. Inconsistencies in the database are possible.
RESTORE DATABASE successfully processed 15513188 pages in 1867.005 seconds (64.915 MB/sec).
Msg 3313, Level 16, State 1, Line 2
During redoing of a logged operation in database 'XYZ', an error occurred at log record ID (310751:4331:4). Typically, the specific failure is previously logged as an error in the Windows Event Log service. Restore the database from a full backup, or repair the database.
I firmly believed this was a misleading error message claiming that some damage had occurred and that the database should be restored from a full backup or repaired. So I executed RESTORE VERIFYONLY, and it reported that my backup was valid/good. Without a doubt.....
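For reference, this is roughly the command I mean; the backup path is hypothetical:

-- Verify that the backup set is complete and readable.
RESTORE VERIFYONLY FROM DISK = N'E:\Backup\XYZ_full.bak';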
Let's do some more troubleshooting of the restore:
- The error log suggested that the restore failed because SQL Server had insufficient memory.
- The backup file's metadata (retrieved with RESTORE HEADERONLY) showed that the database contains an in-memory (filestream) filegroup (see the sketch after this list).
- The memory allocated to SQL Server was fully utilized before the restore failed.
- On Production, I checked the memory usage of the memory-optimized tables, which came to about 70 GB.
- The Test Server had only 60 GB of OS memory.
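As a sketch of what that metadata check looks like (the backup path is hypothetical), the file list in particular shows the memory-optimized (filestream) container:

-- Inspect the backup's header and file list before restoring.
RESTORE HEADERONLY FROM DISK = N'E:\Backup\XYZ_full.bak';
RESTORE FILELISTONLY FROM DISK = N'E:\Backup\XYZ_full.bak';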
Let's find out the memory usage per table; the thing I'm really interested in is the memory-optimized tables, starting with the GUI... Even that was not very clear to me, so let's investigate further using a query.
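Here is a minimal sketch of the kind of query I mean, based on the sys.dm_db_xtp_table_memory_stats DMV (run it inside the database that holds the memory-optimized tables):

-- Memory allocated/used by memory-optimized tables, largest first.
SELECT OBJECT_NAME(t.object_id)                 AS table_name,
       t.memory_allocated_for_table_kb / 1024.0 AS allocated_mb,
       t.memory_used_by_table_kb / 1024.0       AS used_mb
FROM sys.dm_db_xtp_table_memory_stats AS t
ORDER BY t.memory_used_by_table_kb DESC;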
Resolution: I provided 60 GB + 40 GB = 100 GB of memory to SQL Server and the database restored properly, but I requested an additional 5 GB from SysOps, for a total of 105 GB. If the server on which you restore a database does not have enough available memory for the memory-optimized tables in the backup, the database will not come online and will be marked Suspect/Recovery Pending. After all of this effort, I was able to restore the "Test" database on our DBA QA/Test Server. What are your thoughts? Please leave a comment!!! In the next post, I will discuss how I handled a TB+ database in Recovery Pending state.... Yes, I had this terrible scenario on my production box and, as you may have guessed, I spent over 11 hours on it. Cheers, Zahid (Sagar) |
| Need Help.......Please....... |
| Install SQL Server and do the post-configuration with just a single click... because you don't want to waste your time sitting next to the monitor.... !!Automate Development Process Operations!! As we all know, 100% automation is not achievable, but you can automate your daily server administration activities. We should develop a stronger sustained-engineering strategy for eventually automating some or all operations and administration. Okay, don't move all of your operations at once; I would recommend automating a few procedures and then moving to the next level once you believe the operation is stable.... I run my server administration almost entirely automated and GUI-free, using SQLCMD, PowerShell, and Windows batch files... Let's talk about what I do and what you can do in your own environment....
- Database creation/cloning, multi-database refresh/restore/backup, and database migration, all fully automated using custom stored procedures and SQLCMD
- Download the SQL .bak file from the FTP server and restore it onto the database server using an automated WinSCP command!!!
- Deploy code to multiple databases and multiple servers using SQLCMD
- Log the whole database deployment process into a central database table using SQLCMD, recording who deployed the code, who took the backup, who restored the database, when, and more (see the sketch after this list)
- Create/Alter/Drop SQL logins and change a SQL login's password on multiple servers with just one click
- Install SQL Server in silent mode, install the latest service pack from CMD, add the SQL port using PowerShell, and reboot the server from the command line. YES, ALL OF THIS IS WRAPPED UP in a single BATCH FILE; all you need is ONE CLICK, then go grab a coffee, because I don't like clicking Next, Next, Next in a GUI!!!
- What about post-configuration??? YES, I run a PowerShell script that calls 17 SQL scripts (backup job, index job, memory settings, Database Mail configuration, default alerts, etc.) and gets your instance ready to go LIVE in production (see the second sketch after this list)!!!
- Bulk-load data into the warehouse database using the BCP command-line utility....
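As an illustration of the deployment-logging idea above, here is a minimal sketch; the table and column names are hypothetical, not my actual schema:

-- A central deployment-log table plus the INSERT each deployment script can run.
CREATE TABLE dbo.DeploymentLog
(
    LogId        INT IDENTITY(1,1) PRIMARY KEY,
    ServerName   SYSNAME       NOT NULL DEFAULT (@@SERVERNAME),
    DatabaseName SYSNAME       NOT NULL,
    ActionType   VARCHAR(20)   NOT NULL,   -- e.g. DEPLOY / BACKUP / RESTORE
    ScriptName   NVARCHAR(260) NULL,
    PerformedBy  SYSNAME       NOT NULL DEFAULT (SUSER_SNAME()),
    PerformedAt  DATETIME2(0)  NOT NULL DEFAULT (SYSDATETIME())
);

INSERT INTO dbo.DeploymentLog (DatabaseName, ActionType, ScriptName)
VALUES (DB_NAME(), 'DEPLOY', N'V1_02_add_orders_index.sql');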
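And as a taste of the post-configuration scripts, this is the kind of setting they apply (the memory value is hypothetical; size it for your own server):

-- Example post-configuration step: memory settings and backup compression.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 12288;   -- hypothetical value
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;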
For more details, please contact me. Cheers, Zahid |
| There was a requirement to create a nonclustered index on specific columns in 200 tables!!! Crazy, isn't it???? Even 1,000 (one thousand) tables, I don't care!!! Can we automate this process dynamically using SQLCMD & T-SQL? ***Yes---We Can*** (a rough sketch of the idea follows below) SELECT ' CREATE NONCLUSTERED INDEX [ NIDX_' + SYSCOLUMNS.NAME + '] ON ' + SYSOBJECTS.NAME + .............................GO ---To get the complete code, please contact me. Cheers, Zahid |
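Here is a minimal sketch of the same idea using the sys.tables / sys.columns catalog views; the filter column is hypothetical, and it only covers the basic case:

-- Generate CREATE NONCLUSTERED INDEX statements for every table that has the target column.
SELECT 'CREATE NONCLUSTERED INDEX [NIDX_' + c.name + '_' + t.name + '] ON '
       + QUOTENAME(SCHEMA_NAME(t.schema_id)) + '.' + QUOTENAME(t.name)
       + ' (' + QUOTENAME(c.name) + ');'
FROM sys.tables  AS t
JOIN sys.columns AS c ON c.object_id = t.object_id
WHERE c.name = N'CustomerId';   -- hypothetical target column
-- Save the output to a .sql file and run it with SQLCMD.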
| Hi All, I had a request where more than a thousand .dat files had to be imported into SQL Server. First, I converted those .dat files to .txt files with the following command: rename *.dat *.txt Thanks, Zahid |
| I would recommend coding for a couple of scenarios in order to catch real errors during script deployment, particularly when a query that updates a portion of the data fails partway through. When this occurs, developers must redo their work to clean things up; worse, it may compromise data integrity and leave the database in a bad state. To avoid this, I always advise developers to wrap their SQL code in a template like this:
BEGIN
    BEGIN TRY
        BEGIN TRAN;
            -- Add SQL logic here
        COMMIT TRAN;
    END TRY
    BEGIN CATCH
        -- Roll back only if a transaction is still open
        IF @@TRANCOUNT > 0
            ROLLBACK TRAN;
        SELECT
             ERROR_NUMBER()    AS ErrorNumber
            ,ERROR_SEVERITY()  AS ErrorSeverity
            ,ERROR_STATE()     AS ErrorState
            ,ERROR_PROCEDURE() AS ErrorProcedure
            ,ERROR_LINE()      AS ErrorLine
            ,ERROR_MESSAGE()   AS ErrorMessage;
    END CATCH
END;
Thanks, Zahid |
| Install Virtualmin, Nginx 1.10, PHP 7, and MariaDB 10 on a Linux server. Virtualmin is a powerful and flexible web hosting control panel for Linux and BSD systems, and a cost-effective, comprehensive solution for virtual web hosting management. If you want to build your website and host your mail server, please contact me. Zahid |
| This is where I'll be sharing my thoughts on SQL topics that matter to me. Who knows... I might even share documentation, videos, and links to other interesting stuff. If I catch your interest, let me hear from you... Zahid
|
My Dear TechyFrndZ, welcome to the launch of www.zclouddata.com, my improved web-based application, and my first blog post, High on SQL!!! I am so excited to share with you MORE of what this site has to offer! The amount of ideas, how-tos, and inspiration whirling around in my head is ready to jump out as solutions to critical problems in Microsoft SharePoint Server, SQL Server, Windows Server, and Linux Server. I wanted to start by telling you why I created this blog, what you will be seeing more of in my High on SQL posts, and my overall experience with cloud-based and on-premise SQL database administration. Cheers, Zahid Ahamed (Sagar)
|
|