Wednesday, January 17, 2024

Best Practice Tip #1: Beware of Best Practices

It may sound contradictory that the first item on a best-practices list is to be wary of best practices. But when you hear a piece of 'best practice' advice, it's wise to take a step back and think it through before jumping in.

Now, it's true that not every system is completely unique – there are times when systems are pretty much carbon copies of each other. But each organization usually has its own specific needs and ways of doing things. There is no one-size-fits-all answer that works all the time. So even if a piece of advice sounds good on paper, it's important to check whether it really fits your particular situation.

Also, be wary of best practices that may be out of date. This includes tips written for older versions of SQL Server, or from the days when 8 GB of RAM was a lot. When you're searching online, you might run into advice that's been hanging around for a while. Likewise, a strategy that worked wonders for your system in the past might not be the best choice now if things have changed. And don't forget: even if a tip seems like it'll boost your system, it can have downsides, like the risk of breaking things you've already built, which could be a major issue.


A Detailed Example:


Imagine you're managing your company's databases. You find this often-suggested tip: 'Always update to the latest database software for top performance and security.' It sounds sensible, and in fact, at my current job, it's almost a given to keep everything up to date.

But here's the catch: just because it's the latest and greatest version doesn't mean it's automatically the right choice for you. It's all about weighing the good against the potential hiccups. New versions might, and often do, offer cool features, but will they integrate smoothly with your current setup? Could they cause unexpected problems? It's about finding the right balance for your specific needs.
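Before even weighing the pros and cons, it helps to know exactly what you're running today. Here's a minimal T-SQL sketch for taking stock of the instance and its databases; both queries use standard, long-standing system functions and views:

  -- What build, patch level, and edition is this instance running?
  SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
         SERVERPROPERTY('ProductLevel')   AS ProductLevel,
         SERVERPROPERTY('Edition')        AS Edition;

  -- What compatibility level is each database pinned to? Databases still on an
  -- older level are a hint that an upgrade will need extra testing.
  SELECT name, compatibility_level
  FROM sys.databases
  ORDER BY name;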


Here are a few points to consider:


  • You have a database version that is stable and has been customized to suit your company's unique needs. Yes, this violates the best practice of not changing default settings or deploying solutions that will get overwritten when you upgrade. But here you are. The latest database version offers new features, but it's unknown how they will interact with your customizations.
  • Best practices often become outdated. Check whether the advice is based on current information or whether it was more relevant to older versions of the database software.
  • Analyze how the upgrade might affect your current workload and system performance.
  • Understand the potential risks, such as incompatibility with existing applications or downtime during the upgrade process.
  • While the upgrade might offer improved performance and security, it could also introduce new challenges or instability.
  • The risk of disrupting your current stable environment might outweigh the benefits of the new features in the latest version.


In any of the above circumstances, blindly following the best practice of upgrading to the latest software version without considering these factors could lead to unforeseen complications. 

Also, always test, test, and test again, and have more than one back-out/rollback option ready in case things don't go as planned.
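As one illustration, the simplest back-out option is a verified backup taken right before the change. A hedged sketch, with the database name and path as placeholders (COPY_ONLY keeps this ad-hoc backup from disturbing your regular backup chain):

  -- One-off, pre-change backup; COPY_ONLY leaves the normal backup chain untouched
  BACKUP DATABASE [YourDatabase]
  TO DISK = N'X:\Backups\YourDatabase_PreChange.bak'
  WITH COPY_ONLY, CHECKSUM, INIT;

  -- Confirm the backup file is readable before you touch anything
  RESTORE VERIFYONLY
  FROM DISK = N'X:\Backups\YourDatabase_PreChange.bak'
  WITH CHECKSUM;

Keep in mind that RESTORE VERIFYONLY is not a full restore test, so for anything important, rehearse the actual restore on a test server as well.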

Here are some more examples:

  • Password Management: The old security best practice was to change passwords frequently, say every 90 days. This advice is now being questioned because it often leads to weaker passwords and password fatigue. The modern approach emphasizes strong, unique passwords, possibly managed through a password manager, and the use of multi-factor authentication. MFA is already available for Azure SQL, and you can achieve it even for on-premises SQL Servers (see the sketches after this list).
  • CPU, Memory and Storage Solutions: Buying and provisioning physical servers for data storage and processing was a common practice. In today's cloud computing world, CPU, memory, and storage are not only cheap, they are dynamically available and transient – not even a long-term investment if it turns out you don't need them in the future.
  • Denormalization for Performance: In the past, full denormalization was a common way to improve database performance, especially in data warehousing systems with limited resources. Nowadays, retaining some degree of normalization is usually preferred. And I'm aware that here I'm dangerously close to the long-running debate/feud between Team Star Schema and Team Snowflake Schema.
  • Vertical Scaling: The traditional approach to handling increased load or data volume was to scale up (vertical scaling) by upgrading the existing server's hardware (more CPU, RAM, storage). Modern databases often favor horizontal scaling (scaling out), which involves adding more nodes to the system (e.g., AlwaysOn Availability Groups), offering better fault tolerance and scalability.
  • Full Database Backups: In the more distant past, relying solely on full database backups was common – sometimes the only option available. Not anymore: while still important, full backups alone are inefficient for VLDBs. Differential and transaction log backups are now frequently used alongside full backups to minimize data loss and recovery time (sketched below).
  • Manual Tuning and Maintenance: Earlier, database administrators (DBAs) spent considerable time manually tuning queries and maintaining databases. Today, many database engines incorporate automated performance tuning, real-time monitoring, and AI-driven optimization, reducing the need for manual intervention (also sketched below).
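To make a few of these concrete, here are hedged T-SQL sketches of the modern practices mentioned above. The database name, backup path, and user are placeholders of mine; the external-provider syntax assumes Azure SQL with directory (Microsoft Entra / Azure AD) integration, and automatic tuning assumes SQL Server 2017+ or Azure SQL:

  -- MFA-capable authentication in Azure SQL: create a user from the directory
  -- instead of a SQL login with a password
  CREATE USER [jane.doe@contoso.com] FROM EXTERNAL PROVIDER;

  -- Differential backup: captures only pages changed since the last full backup,
  -- far cheaper than nightly full backups on a VLDB
  BACKUP DATABASE [YourDatabase]
  TO DISK = N'X:\Backups\YourDatabase_Diff.bak'
  WITH DIFFERENTIAL, CHECKSUM;

  -- Automatic plan correction: revert to the last known good plan when the engine
  -- detects a plan-choice regression (requires Query Store)
  ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON;
  ALTER DATABASE [YourDatabase] SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);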


These examples illustrate how database management best practices have evolved and will continue to do so.