I was tagged by Gail Shaw to post two big mistakes made during my professional career. The only challenge here was to narrow the list down to two 🙂
The first one is very easy. During my early days working with SQL Server, I wore a lot of hats, including that of web developer. One of my first major web projects was to create a student assessment system, which would allow instructors to create online exams. Students would then be able to use the web interface to take the exams, and the manual paper-grading process would be a thing of the past. Now I’m not ashamed to confess that I was underqualified at the time to complete this project effectively; as the sole developer/DBA, the entire project from spec to support rested on my inexperienced shoulders. Nevertheless, I forged on and delivered the application on time, albeit untested. The magic hour was the following morning, when a dozen educators were to begin entering exams on the new application. I should also mention that I was still in college at the time, and was in class – over an hour’s drive away – during the critical go-live.
It probably goes without saying that my phone started ringing shortly after the first staff members arrived. Problems were rampant, and I ended up leaving class to go address the issues. I dodged the angry mob at the front door and managed to get in and take care of the most pressing problems so exam building could commence. In the end, the application was made usable and found a niche where it worked pretty well. However, I can’t help but wonder whether this tool would have gained more widespread acceptance if I had been more experienced at the time and had done a better job during development and deployment.
Lesson learned: Admit when you’re in over your head, and insist upon a thorough testing cycle.
The second one caused me a good deal of embarrassment and cost me the better part of a day. After receiving a report from an end user that a critical report had not been run on one of our main databases, I got with our hardware guys to arrange for a restore of the database backup file from tape. Since this was a large database, it took a few hours to copy over, but our backup guru agreed to copy the file directly to the development server to save another copy operation from live to dev. I got the call a few hours later that the copy was complete, but I found only old files in the target directory (and deleted some of them as part of a periodic manual cleanup). I called our backup guy again and told him something had gone wrong and the file hadn’t been copied. Always a good sport, he kicked off the file restore again.

When the call came that the process was again complete, I checked the backup directory and still found only old files, including one I had deleted earlier. I made another call to our backup guy to find out what was wrong with the backup software, and simultaneously opened a window to the live database server’s backup folder. As I was explaining to our backup engineer that he had made a mistake, I saw the filenames in the backup directory on the live server – which looked curiously like the files I had deleted! The database backups on the live server followed a different naming convention than those on the dev side, and I had recklessly deleted the restored file the first time. A quick RESTORE HEADERONLY confirmed that I had just wasted a good part of my day, as well as that of one of our best hardware guys.
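For what it’s worth, RESTORE HEADERONLY is a cheap sanity check worth running before assuming anything about a backup file: it reads only the backup header, restores nothing, and tells you which database and server the file actually came from. A minimal sketch follows; the file path and backup name are made up for illustration, not the actual ones from this incident.

-- Read the backup header without restoring anything.
-- The DatabaseName, ServerName, and BackupStartDate columns in the result
-- show where the backup came from and when it was taken.
RESTORE HEADERONLY
FROM DISK = N'D:\Restores\CriticalDB_Full.bak';

-- Optionally, list the data and log files contained in the backup as well:
RESTORE FILELISTONLY
FROM DISK = N'D:\Restores\CriticalDB_Full.bak';

Thirty seconds with that command would have told me the "old" file in the dev directory was exactly the database I had asked to have restored.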
Lesson learned: Before you assume someone/something else is at fault, make sure you’re not doing something silly to cause the problem in the first place.