I’ve come to learn that there are some really flaky systems out there. Not just a few quirks here and there, but significant problems that someone should have caught.
For example, I spent some time earlier this week consolidating two SQL Server machines into one. Through this process, I had to evaluate the vendor app that interfaced with one of these databases. In examining the configuration, I found that every user of the system was connecting through a single static SQL user account – not a good idea, considering the merits of Windows authentication, but not a critical problem. What was a dealbreaker, however, was that this shared account had been given db_owner privileges. Furthermore, the user ID and password of this db_owner account were stored in a configuration file on each machine!
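To make the risk concrete, here is a minimal sketch of the anti-pattern described above. The server, database, and account names are hypothetical, and the config format is just an example – the point is that anyone who can open the file can reconstruct a db_owner connection string.

```python
# Sketch of the anti-pattern: a shared SQL login with db_owner rights
# stored in plain text on every client machine. All names are made up.
import configparser

CONFIG_TEXT = """
[database]
server = SQLPROD01
database = VendorApp
user = vendor_app
password = s3cret!
"""

config = configparser.ConfigParser()
config.read_string(CONFIG_TEXT)
db = config["database"]

# Anyone with read access to the file can now connect as db_owner.
conn_str = (
    f"Driver={{ODBC Driver 17 for SQL Server}};"
    f"Server={db['server']};Database={db['database']};"
    f"UID={db['user']};PWD={db['password']};"
)
print(conn_str)
```

Reading the file takes one line of code – there is no secret here, only an honor system.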
The most appalling part of this was my call to the vendor. I explained what I had found, and the account rep insisted that it had to be configured this way for the application to work properly. More frightening, he didn’t consider this setup a problem at all. Despite this advice, I ended up doing some experimentation using Windows authentication and a diminished level of access. Contrary to what the account rep told me, this setup worked just fine.
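For comparison, here is a sketch of the kind of setup that worked in my testing: Windows authentication, so nothing secret ever lands in a client-side file. The driver name and server/database names are hypothetical placeholders.

```python
# Sketch of the alternative: delegate authentication to Windows
# (Trusted_Connection) so no credentials are stored on the client.
def trusted_conn_str(server: str, database: str) -> str:
    """Build a connection string with no embedded user ID or password."""
    return (
        f"Driver={{ODBC Driver 17 for SQL Server}};"
        f"Server={server};Database={database};"
        f"Trusted_Connection=yes;"
    )

conn = trusted_conn_str("SQLPROD01", "VendorApp")
assert "PWD" not in conn  # nothing to leak from a config file
print(conn)
```

On the server side, the corresponding Windows group would be granted only the roles the app actually needs (for example, db_datareader and db_datawriter rather than db_owner).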
Now what I’m wondering is, at what step in the development project did it seem like a good idea to grant everyone db_owner rights – and to put that login data directly on the client machine? I will admit to taking some shortcuts during system development, but generally during the proof-of-concept phase. How an oversight this significant could get past developers, DBAs and QA is beyond me.
Now had there been a compromise, I’m sure the software vendor would have been questioned about it, but we on the front lines would have borne the bulk of the mess. The point is, we must always be alert to poorly designed systems – even if it’s someone else’s poor design.