4 Things I Wish I Knew Sooner

I was recently called out by Tim Costello in a blog series started by Mike Walsh: http://www.interworks.com/blogs/tcostello/2014/08/27/4-things-i-wish-i-knew-sooner-bi-consultant

The series calls for each blogger to list 4 things they wish they had known sooner in their career as a data geek.

1) Knowing how to query data != DBA. You will be insulted. There is a tremendous amount to learn, and most technologists have no idea how much they don’t know about your field. What I found after becoming a data professional is that in the world of SQL you can take many different paths for your data career, such as TSQL developer, DBA, report writer, ETL, BI, BA and more. What you will also find is that a run-of-the-mill technologist who can write a select statement will call himself a DBA. Let it go; don’t bother arguing with them.

2) Know Recovery – I was shown how to make sure we had backups early in my new role as a DBA; quickly after that we had (roughly) 100% turnover. Soon after came a time when we had to restore a database to a point in time. That process took much longer than it should have. Know your backups and know your restore processes. Practice them often.

3) Your enthusiasm will not become contagious – You might become very enthusiastic about your DBA career. You may find yourself attending free online training, going to user group meetings, attending SQL Saturdays, or any number of other events to boost your career. Others in your workplace may simply view their job as a job, not a career like you do. You can encourage them to join you, but don’t let their lack of self-learning impact you. Also, as you become more successful, don’t let their jealousy impact your desire to learn and be the best person you can be.

4) Pace yourself – You will have to learn to balance your work life and home life, and that includes your study time. Work-life balance is very important, not only in terms of your employer working you too hard, but also the time you spend on self-learning, blogging, reading, etc. Don’t miss out on those special family moments, but also don’t be afraid to make a few sacrifices to ensure financial stability. It is a balance after all.



An error occurred while executing batch. Error message is: Arithmetic Overflow

Recently an incident came across my desk where an end user was receiving an error trying to retrieve some records from the database. The error message the customer received was “An error occurred while executing batch. Error message is: Arithmetic Overflow”.

Experience has taught me that this message is related to invalid data within a record or an invalid column size. In this particular event the column was a FLOAT and a data entry contained characters and symbols, obviously from a bad user entry or an import.

The issue is how to query the data when every attempt to retrieve a set of data containing that record throws this error. The easiest way is to convert the column to another data type. In my case I converted the FLOAT data type to NVARCHAR.

WHERE Value = 'ABC123'

I am then able to query my range of data and see the invalid record. I can then run an update statement to correct the values or null them out.
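As a sketch of that approach (the table and column names here are hypothetical), converting the FLOAT column lets the offending row come back as text instead of raising the overflow:

```sql
-- Hypothetical table dbo.SensorReadings with a FLOAT column [Value].
-- Converting the column to NVARCHAR returns the raw contents of the
-- bad record instead of raising the arithmetic overflow error.
SELECT  ReadingId ,
        CONVERT(NVARCHAR(50), [Value]) AS RawValue
FROM    dbo.SensorReadings
WHERE   ReadingId BETWEEN 1000 AND 2000;

-- Once the invalid record is identified, correct the value or null it out.
UPDATE  dbo.SensorReadings
SET     [Value] = NULL
WHERE   ReadingId = 1042;
```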


Microsoft SQL Server “Most Valuable Professional” MVP << I am one #mvpbuzz


Today I received an email notifying me that I have been awarded the Microsoft “MVP” award. An excerpt from Microsoft’s website states:

“The Microsoft Most Valuable Professional (MVP) Award is our way of saying thank you to exceptional, independent community leaders who share their passion, technical expertise, and real-world knowledge of Microsoft products with others.” reference

It is a huge honor to be given this award just as it has been extremely humbling to be nominated for this award by friends within the SQL Server community. I feel very lucky to be part of the awesome SQL Server community, to have made friends all over the globe and now to be part of the MVP community.

Little did I know how much my life was going to change when, on 10/28/2008, my boss told me that our training budget had been reinstated and asked if I wanted to go to the PASS Summit. I quickly registered, and when I arrived in Seattle and felt the energy of our community I was hooked. During that Summit I was attending a session on DMVs when Grant Fritchey shared with the audience about SQL Saturdays. I took note of it, attended the next two in the Southeast USA, and then started speaking at them.

My passion and desire for learning never let up. I have attended every PASS Summit since, attended or spoken at dozens of events over the past few years, contributed on forums, written blogs, and been a PASS Regional Mentor, Chapter Leader, PASS Big Brother, etc. As much as I try to give back, I still feel like I am receiving more from this community than I can contribute. The friendships I have made and the lives I have been able to touch make all the time and energy put into it worth it.

There are so many others deserving of this award worldwide. If you know someone who consistently gives back to the Microsoft community, please take time to nominate them today.


I am speaking at the PASS Summit 2014

I am super excited to announce that I have been selected to speak at the 2014 PASS Summit. This is considered the Super Bowl of SQL Server conferences. It is the world’s largest gathering of SQL-minded nerds. Consider it a SQL Server Nerd Herd.

Kidding aside, it is the best opportunity to learn as much as possible about SQL Server. There will be over 170 different sessions to pick from over the course of three days, as well as the option to attend two days’ worth of pre-cons. Attending my first PASS Summit in 2008 changed my life and I have not missed one since.

I am very excited that the PASS Summit will be back in Seattle this year and that I get to be counted among the speaker pool.



SQL Server High Availability and Disaster Recovery Options

Working with Microsoft SQL Server for many years, I have spent a lot of time discussing the importance of the availability of SQL Server databases. Two questions that always come up when discussing availability of the data are “Recovery Time Objective (RTO)” and “Recovery Point Objective (RPO)”. Both questions are very important when determining your solution for high availability (HA) within your data center as well as your solution for disaster recovery (DR).

Microsoft SQL Server gives us several options for mitigating potential risk in our SQL Server environments. Each solution has its pros and cons, and careful consideration should go into your choice. Before you can build a solution, you have to have requirements for what risk you are trying to protect against. Some items you might want to protect against are OS failure, hardware failure, data corruption, or a data center failure. We have different options to help mitigate these potential failures, and each solution comes with a certain cost and level of complexity. Organizations have to weigh the cost of the solution and the complexity of managing it against the actual risk. I like to say that it comes down to a math problem that usually involves a budget. What are some technologies we typically see implemented to address HA/DR with SQL Server? Below is a chart I like to use that demonstrates some of the pros and cons of Log Shipping, Database Mirroring, Replication, Windows Failover Clusters, Availability Groups, and Virtualization (not a SQL technology): HA_DR_Options

Many times when discussing HA and DR, people tend to confuse or mix the two. HA is a system design that allows for minimal downtime, typically protecting against an OS or hardware failure. DR is risk avoidance on a much larger scale; when discussing DR we typically cover risk management, RPO, and RTO, and build a disaster recovery plan. DR typically involves a second data center, whereas HA typically builds redundancy within your data center.

As you can see from the list above, all but virtualization provide both hardware- and OS-level protection. Log Shipping is the only solution that provides a load delay in synchronizing data. This is a very important feature that can help you protect against an accidental data oops. Imagine ingesting bad data that would require you to restore a database, or having an accident where an update/delete statement was run without a where clause. If you were using replication, mirroring, or any other HA solution that provides near-real-time replication, those transactions would also be applied against your replica. If you had a load delay of 12 to 24 hours, you could roll the logs to just before the accident and be back online much quicker than by restoring the database.
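As a sketch, rolling a delayed log-shipped copy forward to just before the accident might look like this (the database name, file names, and the STOPAT time are hypothetical):

```sql
-- Restore the last full backup on the secondary without recovering it.
RESTORE DATABASE [Sales]
FROM DISK = N'G:\LogShipping\Sales_Full.bak'
WITH NORECOVERY;

-- Roll log backups forward, stopping just before the bad statement ran.
RESTORE LOG [Sales]
FROM DISK = N'G:\LogShipping\Sales_Log_0830.trn'
WITH NORECOVERY;

RESTORE LOG [Sales]
FROM DISK = N'G:\LogShipping\Sales_Log_0900.trn'
WITH STOPAT = N'2014-08-27 08:55:00', RECOVERY;
```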

As previously stated, in order to know which solution is best for you, you really have to know what your requirements are. For me, I typically use a combination of most of the solutions depending on my environment. I have a combination of log shipping, failover clustering, availability groups, and virtualization in place. For very large critical environments, a log-shipped secondary provides a nice level of comfort, knowing I can bring a multi-terabyte database back online in minutes in the event I have a data issue.





Know Backups and Know Restores – SQL Server Restores

A couple of years ago I put together a training session to demonstrate various types of backups and restores. The driving factor behind building this session was the response I got from talking to a dozen or so DBAs about disaster recovery. I asked how many had ever performed a restore where they had to apply transaction logs. I expected everyone to raise their hand, but only one person did. This was from a group where everyone was responsible for backup and recovery of their environment.

With something so simple, I realized there is a big need for demonstrating these actions so that everyone is at least familiar with the process. I had to spice things up, since performing backups and restores is not all that sexy. In my presentation I demonstrate traditional backups using full and log backups, then full with differential and logs, then a file group level backup with piecemeal restore, and finally tail log backups.

With the tail log backup, I back up the tail end of the transaction log after deleting the data file, as well as attaching a transaction log from a database on one virtual machine to a database on another virtual machine and still backing up the tail end of the log. This is always well received.
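A minimal sketch of the tail log backup itself, assuming a hypothetical database named Sales whose data file has been lost:

```sql
-- NO_TRUNCATE allows the log backup even though the database is damaged;
-- NORECOVERY leaves the database in a restoring state, ready for recovery.
BACKUP LOG [Sales]
TO DISK = N'G:\Backups\Sales_TailLog.trn'
WITH NO_TRUNCATE, NORECOVERY;
```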

During my presentation I demo a set of scripts that I use to generate the restore script. I share that many have adapted these scripts to their production routines to generate their restore script after every backup. This helps tremendously for restore validations as well as production restores and could also be leveraged in your DR procedures.

I have included a download of my sample databases, scripts and presentation here >>> KnowBackups



Making a Difference – #SQLFamily

When I started with the SQL community I was first encouraged to start blogging, which I did in September of 2010. While at the PASS Summit in 2010 I was encouraged to submit to speak at a SQL Saturday. I submitted my first-ever session for SQL Saturday #62 and got selected to speak. The event was held on Jan 15th, 2011. Thank you to Jose Chinchilla for encouraging me to submit. (He didn’t really give me a choice.)

It has been an incredible journey since then, speaking at events in numerous states within the US and in multiple countries. My most recent out-of-the-country trip was to Costa Rica to speak at SQL Saturday 282. I am very fortunate to have a good friend who lives there who encouraged me to come visit. I met Kenneth a couple of years prior at a SQL Skills class in Tampa, Florida. A nice thing SQL Skills does is that one night during each week of training they hold a “free for all” where attendees can get up and present a short talk. I took advantage of this and gave a trimmed-down version of my session “Getting Involved and Getting Ahead”, where I share my journey of volunteering in the SQL community.

Little did I know that giving this talk would have such a profound impact on someone in the audience. This is where I met my now good friend Kenneth Urena. After my talk Kenneth approached me with lots of questions on how to do this himself. I shared everything I knew. I talked about local PASS Chapters, virtual chapters, SQL Saturdays, etc.  We connected with each other on social media so we could stay in touch and we have.

While at Summit this past year in Charlotte, NC, we made a deal: if I came to Costa Rica and presented a session, Kenneth would come to Atlanta and present. We both held up our end of the agreement; however, what I did not realize was that it would be the first time my friend presented a technical session in English. I did not find this out until an hour before his session. Kenneth sent me a text to come help him review his slides during lunch (his session was right after lunch). When I arrived he was very nervous, but we went through his material and he was awesome. His language was excellent, his slides were great, and he knew the material very well. My only advice was to take things slow, not to rush, and to speak clearly. I hope this helped calm his nerves, because when he got to his room it was packed.

You should read Kenneth’s account of how he came to start speaking and how his session went in Atlanta. The interaction with the room monitor alone makes the post worth reading. http://sqlcr.com/content/my-first-sql-saturday-english-speaker-few-years-journey

Kenneth, thank you for giving back to the SQL Community and helping to extend the #SQLFamily to Latin America.  You all are awesome. I had the best time in Costa Rica and can’t wait to come back.



Using BMC Control-M to Automate Tasks

My largest client recently purchased BMC Control-M to use as our enterprise scheduler. Since this product went live in our environment, I have watched it displace other schedulers to become the centralized scheduler. As I have learned more about the capabilities of Control-M, and become more alert to references to the product in the marketplace, I have become a huge fan.

Being a database professional, I have become very attached to SQL Server Agent for scheduling anything SQL related. However, I know the limitations of SQL Server Agent when the SQL task is one of many steps in a much larger process. Over the years I have seen very complex processes implemented to schedule a multi-step job: cases where one process (non-SQL) creates a file that another process (a SQL job) ingests, manipulates, and uses to output additional files consumed by yet another process (non-SQL). Those three jobs have to run in a particular order, so the IT admins have to get creative with scheduling.

What I have found with Control-M is that it can schedule each task to run in a workflow, so each job becomes a task in a single process flow within Control-M. The design of workflows in Control-M is much like SSIS. The major benefit of letting Control-M execute these processes is its ability to monitor each process, send notifications on success or failure, and immediately kick off the next process in the workflow after successful completion of the previous task.

As a DBA, giving up control over scheduling my jobs was an uneasy feeling. However, if you are like me, many times you have a job fail due to a dependency; for me it is usually that a file my job is importing didn’t arrive. If this job were part of a Control-M process, then my job would not have run due to the dependency failure, and whoever is responsible for the file I need would be notified.

Our most recent big success with BMC Control-M for Cloud was installing the module for VMware. We have virtual servers that need weekly reboots (mostly web servers), and using Control-M for Cloud has automated that process, eliminating the need for a VMware admin or someone in our Network Operations Center to issue these reboots. A good friend of mine blogged about his experience setting that up here >> ControlMWithBigAl.Wordpress.com



The Primary Filegroup Cannot be Backed Up as a File Backup

File group backups are great when working with very large databases (VLDBs) that have been partitioned. Typically, when I come across file group backups implemented in production, the database is using the full recovery model. With the full recovery model you can specify individual file groups for backup. A very common backup strategy for VLDBs is to partition the non-changing data into secondary file groups so that only current data is being written to the primary file group. This allows for a more efficient backup solution as well as recovery plan. Performing file group backups allows for a piecemeal restore, meaning you can bring data back online in order of importance. Backing up an individual file group can be done using syntax much like:

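A minimal sketch of that syntax, assuming a hypothetical database named MyVLDB backing up to a local disk:

```sql
-- Back up only the primary file group (full recovery model).
BACKUP DATABASE [MyVLDB]
FILEGROUP = 'PRIMARY'
TO DISK = N'G:\Backups\MyVLDB_Primary.bak';
```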

This TSQL statement will work if the database is using the full recovery model, but what if the database is in the simple recovery model? In that case, specifying FILEGROUP = 'PRIMARY' would not work. You would get a message that states:

Msg 3004, Level 16, State 1, Line 19
The primary filegroup cannot be backed up as a file backup because the database is using the SIMPLE recovery model. Consider taking a partial backup by specifying READ_WRITE_FILEGROUPS.
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.

How would you back up the primary file group?

In this case you would need to specify READ_WRITE_FILEGROUPS in the statement.
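A minimal sketch, again assuming a hypothetical database named MyVLDB:

```sql
-- Under the simple recovery model, back up the primary file group and
-- any other read-write file groups in one partial backup.
BACKUP DATABASE [MyVLDB]
READ_WRITE_FILEGROUPS
TO DISK = N'G:\Backups\MyVLDB_ReadWrite.bak';
```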


If your secondary file groups are not read-only, they would also be backed up. If you are not writing data to those secondary file groups, you should consider marking them read-only. There are added benefits to them being read-only in addition to this backup methodology. You can read more about that here: http://technet.microsoft.com/en-us/library/ms190257(v=sql.105).aspx
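Marking a secondary file group read-only is a one-line change (the database and file group names here are hypothetical):

```sql
-- Once no new data is written to the file group, make it read-only so a
-- single final backup of it stays valid going forward.
ALTER DATABASE [MyVLDB]
MODIFY FILEGROUP [Archive2013] READ_ONLY;
```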




How to check for last SQL Server backup

As a database professional, I am often asked to review the health of database environments. When I perform these reviews, one of the many checks is reviewing backup history and making sure that the backup plans in place meet the requirements and service level agreements of the business. I have found a number of backup strategies implemented using full, differential, and transaction log backups in some fashion.

In more cases than I would like to share, I have found business-critical databases that are not being backed up properly. In the worst case this means no backups at all, or a backup strategy that does not meet the recoverability requirements of the business.

When doing an initial check I gather many details about the environment. Regarding backups, I capture things such as the recovery model, last full backup, last differential, and the last two transaction log backups. Having this information allows me to determine what the backup strategy is and point out any recoverability gaps.

Some examples I have found:

1) No backups, period.

2) A full backup from months ago plus daily differentials; the full had been purged from the system.

3) A full backup of a user database in the full recovery model with no transaction log backups.

4) Proper use of weekly fulls, daily differentials, and scheduled transaction log backups, except the log schedule was set to hourly while the customer expected no more than 15 minutes of data loss.

I am happy to report that I do find proper backup routines that meet the customers’ service level agreements too.

The code I like to use for this check is below.

SELECT  a.Name AS DatabaseName ,
        CONVERT(SYSNAME, DATABASEPROPERTYEX(a.name, 'Recovery')) RecoveryModel ,
        COALESCE(( SELECT   CONVERT(VARCHAR(12), MAX(backup_finish_date), 101)
                   FROM     msdb.dbo.backupset
                   WHERE    database_name = a.name
                            AND type = 'd'
                            AND is_copy_only = '0'
                 ), 'No Full') AS 'Full' ,
        COALESCE(( SELECT   CONVERT(VARCHAR(12), MAX(backup_finish_date), 101)
                   FROM     msdb.dbo.backupset
                   WHERE    database_name = a.name
                            AND type = 'i'
                            AND is_copy_only = '0'
                 ), 'No Diff') AS 'Diff' ,
        COALESCE(( SELECT   CONVERT(VARCHAR(20), MAX(backup_finish_date), 120)
                   FROM     msdb.dbo.backupset
                   WHERE    database_name = a.name
                            AND type = 'l'
                 ), 'No Log') AS 'LastLog' ,
        COALESCE(( SELECT   CONVERT(VARCHAR(20), backup_finish_date, 120)
                   FROM     ( SELECT    ROW_NUMBER() OVER ( ORDER BY backup_finish_date DESC ) AS 'rownum' ,
                                        backup_finish_date
                              FROM      msdb.dbo.backupset
                              WHERE     database_name = a.name
                                        AND type = 'l'
                            ) withrownum
                   WHERE    rownum = 2
                 ), 'No Log') AS 'LastLog2'
FROM    sys.databases a
        LEFT OUTER JOIN msdb.dbo.backupset b ON b.database_name = a.name
WHERE   a.name <> 'tempdb'
        AND a.state_desc = 'online'
GROUP BY a.Name ,
        CONVERT(SYSNAME, DATABASEPROPERTYEX(a.name, 'Recovery'))
ORDER BY a.name

Ensuring that you have backups is crucial to any check of a SQL Server instance. In addition to ensuring that backups are being created, validation of those backups is just as important. Backups are only valid if you can restore them.
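A quick sanity check, though not a substitute for an actual restore, is RESTORE VERIFYONLY (the backup path here is hypothetical):

```sql
-- Verifies that the backup set is complete and readable
-- without actually restoring it.
RESTORE VERIFYONLY
FROM DISK = N'G:\Backups\Sales_Full.bak';
```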

When I have the opportunity to share my experiences of backup and recovery with people, I always like to show how to back up the tail end of a transaction log, and how to attach a transaction log from one database to another in order to back up the tail end of the log. I have created a couple of videos on how to accomplish this that you can view using this link: http://www.timradney.com/taillogrestore
