Microsoft SQL Server “Most Valuable Professional” MVP << I am one #mvpbuzz

[Image: Microsoft MVP award logo]

Today I received an email notifying me that I have been awarded the Microsoft MVP award. An excerpt from Microsoft’s website states:

“The Microsoft Most Valuable Professional (MVP) Award is our way of saying thank you to exceptional, independent community leaders who share their passion, technical expertise, and real-world knowledge of Microsoft products with others.”

It is a huge honor to be given this award just as it has been extremely humbling to be nominated for this award by friends within the SQL Server community. I feel very lucky to be part of the awesome SQL Server community, to have made friends all over the globe and now to be part of the MVP community.

Little did I know how much my life was going to change when, on 10/28/2008, my boss told me that our training budget had been reinstated and asked if I wanted to go to the PASS Summit. I quickly registered, and when I arrived in Seattle and felt the energy of our community, I was hooked. During that Summit I was attending a session on DMVs when Grant Fritchey told the audience about SQL Saturdays. I took note of it, attended the next two in the Southeast USA, and then started speaking at them.

My passion and desire for learning never let up. I have attended every PASS Summit since, attended or spoken at dozens of events over the past few years, contributed on forums, written blogs, and been a PASS Regional Mentor, Chapter Leader, PASS Big Brother, etc. As much as I try to give back, I still feel like I am receiving more from this community than I can contribute. The friendships I have made and the lives I have been able to touch make all the time and energy put into it worth it.

There are so many others deserving of this award worldwide. If you know someone who consistently gives back to the Microsoft community, please take time to nominate them today.

Posted in Presentations, SQLServerPedia Syndication | 5 Comments

I am speaking at the PASS Summit 2014

I am super excited to announce that I have been selected to speak at the 2014 PASS Summit. This is considered the Super Bowl of SQL Server conferences. It is the world's largest gathering of SQL-minded nerds. Consider it a SQL Server Nerd Herd.

Kidding aside, it is the best opportunity to learn as much as possible about SQL Server. There will be over 170 different sessions to pick from over the course of three days, as well as the option to attend two days' worth of pre-cons. Attending my first PASS Summit in 2008 changed my life, and I have not missed one since.

I am very excited that the PASS Summit will be back in Seattle this year and that I get to be counted among the speaker pool.

[Image: Speaking at the PASS Summit 2014 badge]

Posted in SQLServerPedia Syndication, Training/Events | 2 Comments

SQL Server High Availability and Disaster Recovery Options

Working with Microsoft SQL Server for many years, I have spent a lot of time discussing the importance of the availability of SQL Server databases. Two questions that always come up when discussing availability of the data are the Recovery Time Objective (RTO) and the Recovery Point Objective (RPO). Both are very important when determining your solution for high availability (HA) within your data center as well as your solution for disaster recovery (DR).

Microsoft SQL Server gives us several options for mitigating potential risk for our SQL Server environments. Each solution has its pros and cons, and careful consideration should go into choosing yours. Before you can build a solution, you have to define which risks you are trying to protect against. Some items you might want to protect against are OS failure, hardware failure, data corruption, or a data center failure. We have different options to help mitigate these potential failures, and each solution comes with a certain cost and level of complexity. Organizations have to weigh the cost of the solution and the complexity to manage it against the actual risk. I like to say that it comes down to a math problem that usually involves a budget. What are some technologies we typically see implemented to address HA/DR with SQL Server? Below you will see a chart I like to use that demonstrates some of the pros and cons of Log Shipping, Database Mirroring, Replication, Windows Failover Clusters, Availability Groups, and Virtualization (not a SQL Server technology).

[Chart: HA/DR options comparison]

Many times when discussing HA and DR, people tend to confuse or mix the two. HA is a system design that allows for minimal downtime, typically protecting against an OS or hardware failure. DR is risk avoidance on a much larger scale. When discussing DR we typically cover risk management, RPO, and RTO, and build a disaster recovery plan. DR typically involves a second data center, whereas HA typically means building redundancy within your data center.

As you can see from the list above, all but virtualization provide both hardware and OS level protection. Log Shipping is the only solution that provides a load delay in synchronizing data. This is a very important feature that can help you protect against an accidental data oops. Imagine ingesting bad data that would require you to restore a database, or having an accident where an update or delete statement was run without a WHERE clause. If you were using replication, mirroring, or any other HA solution that provides near real-time replication, those transactions would also be applied against your replica. If you had a load delay of 12 to 24 hours, you could roll the logs forward to just before the accident and be back online much more quickly than by restoring the database.
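As a sketch of that recovery path, assuming a log-shipped secondary named SalesDB and hypothetical backup file names and a hypothetical accident time, rolling the logs forward to just before the bad statement could look like this:

```sql
-- Restore the last full backup without recovering, so logs can still be applied
RESTORE DATABASE SalesDB
    FROM DISK = 'D:\Backups\SalesDB_Full.bak'
    WITH NORECOVERY;

-- Apply log backups in sequence, staying in NORECOVERY until the last one
RESTORE LOG SalesDB
    FROM DISK = 'D:\Backups\SalesDB_Log1.trn'
    WITH NORECOVERY;

-- On the log backup containing the accident, stop just before it happened
RESTORE LOG SalesDB
    FROM DISK = 'D:\Backups\SalesDB_Log2.trn'
    WITH STOPAT = '2014-05-01 13:59:00', RECOVERY;
```

The database comes online at the STOPAT point, with every transaction before the accident intact and the accidental one never applied.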

As previously stated, in order to know which solution is best for you, you really have to know what your requirements are. I typically use a combination of most of these solutions, depending on the environment. I have a combination of log shipping, failover clustering, availability groups, and virtualization in place. For very large critical environments, a log-shipped secondary provides a nice level of comfort, knowing I can bring a multi-terabyte database back online in minutes in the event of a data issue.


Posted in Backups/Recovery, SQLServerPedia Syndication | 1 Comment

Know Backups and Know Restores – SQL Server Restores

A couple of years ago I put together a training session to demonstrate various types of backups and restores. The driving factor behind building this session was the response I got from talking to a dozen or so DBAs about disaster recovery. I asked how many of them had ever performed a restore where they had to apply transaction logs. I expected everyone to raise their hand, but only one person did. This was from a group where everyone was responsible for backup and recovery of their environment.

With something so simple, I realized there is a big need for demonstrating these actions so that at least everyone is familiar with the process. I had to spice things up, since performing backups and restores is not all that sexy. In my presentation I demonstrate traditional backups using full and log backups, then full with differential and log backups, a filegroup-level backup with piecemeal restore, and finally tail log backups.
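A minimal sketch of the traditional full, differential, and log sequence from the demo, using a hypothetical DemoDB database and backup paths:

```sql
-- Weekly full backup: the baseline every other backup depends on
BACKUP DATABASE DemoDB TO DISK = 'D:\Backups\DemoDB_Full.bak';

-- Daily differential: only the extents changed since the last full
BACKUP DATABASE DemoDB TO DISK = 'D:\Backups\DemoDB_Diff.bak'
    WITH DIFFERENTIAL;

-- Frequent log backups (requires the full or bulk-logged recovery model)
BACKUP LOG DemoDB TO DISK = 'D:\Backups\DemoDB_Log.trn';
```

A restore then replays the chain in the same order: the full, the most recent differential, and every log backup taken after that differential.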

With the tail log backup, I back up the tail end of the transaction log after deleting the data file, as well as attaching a transaction log from a database on one virtual machine to a database on another virtual machine and still backing up the tail end of the log. This is always well received.
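The tail log backup itself is a one-liner; a sketch against the same hypothetical DemoDB, where NO_TRUNCATE is what allows the log to be backed up even though the data file is gone:

```sql
-- Back up the tail of the log after the data file has been deleted;
-- NO_TRUNCATE reads the log without requiring the database to be accessible
BACKUP LOG DemoDB TO DISK = 'D:\Backups\DemoDB_Tail.trn'
    WITH NO_TRUNCATE;
```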

During my presentation I demo a set of scripts that I use to generate the restore script. Many attendees have adapted these scripts into their production routines to generate a restore script after every backup. This helps tremendously with restore validation as well as production restores, and it can also be leveraged in your DR procedures.

I have included a download of my sample databases, scripts and presentation here >>> KnowBackups


Posted in Backups/Recovery | 1 Comment

Making a Difference – #SQLFamily

When I started with the SQL community I was first encouraged to start blogging, which I did in September 2010. While at the PASS Summit in 2010 I was encouraged to submit to speak at a SQL Saturday. I submitted my first ever session for SQL Saturday #62 and got selected to speak. This event was held on January 15, 2011. Thank you to Jose Chinchilla for encouraging me to submit. (He didn't really give me a choice.)

It has been an incredible journey since then, with speaking at events in numerous states within the US and in multiple countries. My most recent out-of-the-country trip was to Costa Rica to speak at SQL Saturday #282. I am very fortunate to have a good friend who lives there who encouraged me to come visit. I met Kenneth a couple of years prior at a SQLskills class in Tampa, Florida. A nice thing that SQLskills does is that one night during each week of training they have a “free for all” where the attendees can get up and present a short talk. I took advantage of this and gave a trimmed-down version of my session “Getting Involved and Getting Ahead”, where I share my journey with volunteering in the SQL community.

Little did I know that giving this talk would have such a profound impact on someone in the audience. This is where I met my now good friend Kenneth Urena. After my talk Kenneth approached me with lots of questions on how to do this himself. I shared everything I knew. I talked about local PASS Chapters, virtual chapters, SQL Saturdays, etc.  We connected with each other on social media so we could stay in touch and we have.

While at Summit this past year in Charlotte, NC, we made a deal: if I came to Costa Rica and presented a session, Kenneth would come to Atlanta and present. We both held up our end of the agreement; however, what I did not realize is that it would be the first time my friend presented a technical session in English. I did not find this out until an hour before his session. Kenneth sent me a text to come help him review his slides during lunch (his session was right after lunch). When I arrived he was very nervous, but we went through his material and he was awesome. His language was excellent, his slides were great, and he knew the material very well. My only advice was to take things slow, not to rush, and to speak clearly. I hope this helped to calm his nerves, because when he got to his room it was packed.

You should read Kenneth’s account of how he came to start speaking and how his session went in Atlanta. The interaction with the room monitor alone makes the post worth reading. http://sqlcr.com/content/my-first-sql-saturday-english-speaker-few-years-journey

Kenneth, thank you for giving back to the SQL Community and helping to extend the #SQLFamily to Latin America.  You all are awesome. I had the best time in Costa Rica and can’t wait to come back.

[Photo: Tim and Kenneth in Atlanta]

Posted in Uncategorized | Leave a comment

Using BMC Control-M to Automate Tasks

My largest client recently purchased BMC Control-M to use as their enterprise scheduler. Since this product went live in our environment, I have been watching it displace other schedulers to become the centralized scheduler. As I have learned more about the capabilities of Control-M and noticed more references to the product in the marketplace, I have become a huge fan.

Being a database professional, I have become very attached to SQL Server Agent for scheduling anything SQL related. However, I know the limitations of SQL Server Agent when the SQL task is one of many steps in a much larger process. Over the years I have seen very complex processes implemented to schedule a multi-step job: for example, one process (non-SQL) creates a file that another process (a SQL job) ingests and manipulates, outputting additional files that will be used by yet another process (non-SQL). Those three jobs have to run in a particular order, so the IT admins have to get creative with scheduling.

What I have found with Control-M is that it can schedule each task to run in a workflow so each job becomes a task for a single process flow within Control-M.  The design of the workflows in Control-M is much like SSIS.  The major benefit with letting Control-M execute these processes is the ability for it to monitor each process, send notifications on success or failure and immediately kick off the next process in the workflow after successful completion of the previous task.

As a DBA, giving up control over scheduling my jobs was an uneasy feeling. However, if you are like me, many times you have a job fail due to a dependency. For me it is usually that a file my job imports didn't arrive. If this job were part of a Control-M process, then my job would not have run due to the dependency failure, and whoever is responsible for the file I need would be notified.

Our most recent big success with BMC Control-M for Cloud was installing the module for VMware. We have virtual servers that need weekly reboots (mostly web servers), and using Control-M for Cloud has automated that process, eliminating the need for a VMware admin or someone in our Network Operations Center to issue these reboots. A good friend of mine blogged about his experience setting that up here >> ControlMWithBigAl.Wordpress.com


Posted in BMC Control-M | 5 Comments

The Primary Filegroup Cannot be Backed Up as a File Backup

Filegroup backups are great when working with very large databases (VLDBs) that have been partitioned. Typically, when I come across filegroup backups implemented in production, the database is using the full recovery model. With the full recovery model you can specify individual filegroups for backup. A very common backup strategy for VLDBs is to partition the non-changing data into secondary filegroups so that only current data is being written to the primary filegroup. This allows for a more efficient backup solution as well as recovery plan. Performing filegroup backups allows for a piecemeal restore, meaning you can bring data back online in order of importance. Backing up an individual filegroup can be done using syntax much like:

BACKUP DATABASE DB_NAME FILEGROUP = 'PRIMARY' TO DISK = 'PATH:\DB_NAME.BAK'

This TSQL statement works if the database is using the full recovery model, but what if the database is using the simple recovery model? In that case, specifying FILEGROUP = 'PRIMARY' would not work. You would get a message that states:

Msg 3004, Level 16, State 1, Line 19
The primary filegroup cannot be backed up as a file backup because the database is using the SIMPLE recovery model. Consider taking a partial backup by specifying READ_WRITE_FILEGROUPS.
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.

How would you back up the primary filegroup?

In this case you would need to specify READ_WRITE_FILEGROUPS in the statement.

BACKUP DATABASE DB_NAME READ_WRITE_FILEGROUPS TO DISK = 'PATH\DB_NAME.BAK'

If your secondary filegroups are not read-only, they will also be backed up. If you are not writing data to those secondary filegroups, you should consider marking them read-only. There are added benefits to read-only filegroups beyond this backup methodology. You can read more about that here: http://technet.microsoft.com/en-us/library/ms190257(v=sql.105).aspx
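As a sketch, marking a hypothetical secondary filegroup named ARCHIVE_FG read-only looks like this (the database must have no other active connections when the change is made):

```sql
-- Mark a secondary filegroup read-only so partial backups can skip it
ALTER DATABASE DB_NAME MODIFY FILEGROUP ARCHIVE_FG READ_ONLY;

-- Subsequent partial backups then cover only PRIMARY and read-write filegroups
BACKUP DATABASE DB_NAME READ_WRITE_FILEGROUPS TO DISK = 'PATH\DB_NAME.BAK'
```

Once the read-only filegroup has one final backup taken of it, it does not need to be included in the routine backup cycle again.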


Posted in Backups/Recovery, Query Tidbits, SQLServerPedia Syndication | Leave a comment

How to check for last SQL Server backup

As a database professional, I am often asked to review the health of database environments. When I perform these reviews, one of the many checks I perform is reviewing backup history and making sure that the backup plans in place meet the requirements and service level agreements of the business. I have found a number of backup strategies implemented using full, differential, and transaction log backups in some fashion.

In more cases than I would like to share, I have found business-critical databases that are not being backed up properly. In the worst case this means no backups at all, or a backup strategy that does not meet the recoverability requirements of the business.

When doing an initial check I gather many details about the environment. Regarding backups, I capture things such as the recovery model, last full backup, last differential, and the last two transaction log backups. Having this information allows me to determine what the backup strategy is and to point out any recoverability gaps.

Some examples I have found are: 1) no backups at all; 2) a full backup from months ago with daily differentials, where the full had been purged from the system; 3) a full backup of a user database in the full recovery model with no transaction log backups; 4) proper use of weekly fulls, daily differentials, and scheduled transaction log backups, except the log backup schedule was set to hourly while the customer expected no more than 15 minutes of data loss. I am happy to report that I also find proper backup routines that meet the customer's service level agreement.

The code I like to use for this check is below.

SELECT  a.name AS DatabaseName ,
        CONVERT(SYSNAME, DATABASEPROPERTYEX(a.name, 'Recovery')) AS RecoveryModel ,
        -- Most recent full backup (type 'D'), excluding copy-only backups
        COALESCE(( SELECT   CONVERT(VARCHAR(12), MAX(backup_finish_date), 101)
                   FROM     msdb.dbo.backupset
                   WHERE    database_name = a.name
                            AND type = 'D'
                            AND is_copy_only = 0
                 ), 'No Full') AS 'Full' ,
        -- Most recent differential backup (type 'I'), excluding copy-only backups
        COALESCE(( SELECT   CONVERT(VARCHAR(12), MAX(backup_finish_date), 101)
                   FROM     msdb.dbo.backupset
                   WHERE    database_name = a.name
                            AND type = 'I'
                            AND is_copy_only = 0
                 ), 'No Diff') AS 'Diff' ,
        -- Most recent transaction log backup (type 'L')
        COALESCE(( SELECT   CONVERT(VARCHAR(20), MAX(backup_finish_date), 120)
                   FROM     msdb.dbo.backupset
                   WHERE    database_name = a.name
                            AND type = 'L'
                 ), 'No Log') AS 'LastLog' ,
        -- Second most recent log backup, to gauge the log backup interval
        COALESCE(( SELECT   CONVERT(VARCHAR(20), backup_finish_date, 120)
                   FROM     ( SELECT    ROW_NUMBER() OVER ( ORDER BY backup_finish_date DESC ) AS rownum ,
                                        backup_finish_date
                              FROM      msdb.dbo.backupset
                              WHERE     database_name = a.name
                                        AND type = 'L'
                            ) withrownum
                   WHERE    rownum = 2
                 ), 'No Log') AS 'LastLog2'
FROM    sys.databases a
WHERE   a.name <> 'tempdb'
        AND a.state_desc = 'ONLINE'
ORDER BY a.name;

Ensuring that you have backups is crucial to any check of a SQL Server instance. In addition to ensuring that backups are being created, validation of those backups is just as important. Backups are only valid if you can restore them.
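One simple validation step can be sketched like this, using hypothetical file names and logical file names; RESTORE VERIFYONLY checks that the backup media is readable and the backup set is complete, while an actual restore under a different name is the only true proof:

```sql
-- Quick check: media is readable and the backup set is complete
-- (this does not guarantee a restore will succeed, but catches gross damage)
RESTORE VERIFYONLY FROM DISK = 'D:\Backups\DemoDB_Full.bak';

-- The real validation: restore to a test database under a new name
RESTORE DATABASE DemoDB_Verify
    FROM DISK = 'D:\Backups\DemoDB_Full.bak'
    WITH MOVE 'DemoDB' TO 'D:\Test\DemoDB_Verify.mdf',
         MOVE 'DemoDB_log' TO 'D:\Test\DemoDB_Verify.ldf',
         RECOVERY;
```

The MOVE clauses assume logical file names of 'DemoDB' and 'DemoDB_log'; RESTORE FILELISTONLY against the backup will tell you the actual names.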

When I have the opportunity to share my experiences of backup and recovery, I always like to show how to back up the tail end of a transaction log, and how to attach a transaction log from one database to another in order to back up the tail end of the log. I have created a couple of videos on how to accomplish this that you can view at this link: http://www.timradney.com/taillogrestore

Posted in Backups/Recovery, Query Tidbits, SQLServerPedia Syndication | 8 Comments

SSIS Training class by Andy Leonard

[Image: Linchpin People logo]


Andy Leonard (b | t | l | f) of Linchpin People has announced a new training class called 'From Zero to SSIS'. The first class is being offered in Reston, VA, May 19-23, 2014. Andy is one of the top gurus of SSIS. If you are looking to improve your SSIS skills, then you need to attend SSIS training from the master!

Course Abstract:

“From Zero to SSIS! is designed to train technology professionals in the fine art of using SQL Server Integration Services (SSIS) to build data integration and Extract-Transform-Load (ETL) solutions. The course is focused around labs and emphasizes a hands-on approach.

The target audience for this training is database professionals, application developers, and business intelligence practitioners interested in acquiring or expanding their existing SSIS skill set. No experience with SQL Server or SQL Server Integration Services is required before attending this training. It is helpful (but not required) that students possess some knowledge of and experience with relational databases. SQL Server knowledge/experience will be more helpful than experience and knowledge with other technologies.”

You can find more details here; if you plan to attend, register for the event here. Why attend an in-person class? Unlike proctored classes, this class is taught by an industry expert on the subject. Hundreds of hours have gone into perfecting this class so that you learn real-world knowledge that you can immediately apply to your job. You get hands-on experience and access to a true master.

Posted in Uncategorized | Leave a comment

How to reset the SA password in SQL Server

Getting locked out of a SQL Server can happen in a number of ways. You could inherit a server that was managed by someone who left the company, and only that person had sysadmin rights, for example. A recent encounter I had was where a database server was built and provisioned in one Active Directory domain and then moved and joined to another, non-trusted domain. All the accounts provisioned within the server, including those for the DBA admins, were basically useless since they couldn't be authenticated. The built-in SA account was locked down per policy, so no one knew the password.

What do you do if you find that you are locked out of a SQL Server instance? The way I handle it is to start SQL Server in single-user mode, launch SQLCMD, and create a new login. Then, still using SQLCMD, I add the new login to the sysadmin role. Once I have the new login created, I restart the SQL Server service and log in using the new credentials. I can then clean up the instance by granting proper access.

If this was a newly inherited server I then take the time to do a health check of the instance to make sure that everything is to our standards. (Usually this means more work)

To start SQL Server in single-user mode, I open a command prompt as an administrator and run the following commands, pressing Enter after each one:

net stop mssqlserver
net start mssqlserver /m
sqlcmd
CREATE LOGIN security WITH PASSWORD = 'Rec0very123'
GO
sp_addsrvrolemember 'security', 'sysadmin'
GO
quit
net stop mssqlserver
net start mssqlserver

You have now stopped and restarted SQL Server in single-user mode, created a new login called 'security', and granted that login sysadmin rights. You then stopped and restarted SQL Server again to put it back in multi-user mode. I have included a screenshot of me completing the task on a development workstation so you can see the syntax and steps.

At this point you can connect to the server with SQL Server Management Studio using the new login, reset the SA password, and/or provision the proper people access.

[Screenshot: resetting the SA password via SQLCMD]

Posted in Uncategorized | 7 Comments