Making sure your systems are correctly configured is part of running the latest software in an optimal way (the assumption being that the latest software is the most useful to you!).
For a software package to run, we need the right combination of operating system, drivers, supporting libraries, and settings.
Controlling this is called configuration management, and it can be very simple on a single desktop or much more complicated across multiple embedded systems.
There are a few questions you can ask yourself (or your developer) that will tell you whether you have a good design and process for configuration management:

1. If a system failed tomorrow, how quickly could you get it running again?
2. Are you confident that all of your systems are set up the same way?
3. Do you control when and how your systems change?
Let’s unpack each one.
Let’s say your hard drive dies tomorrow. What is involved in getting the system running again?
A better process will make this as simple as possible.
The ideal case is a single button press, and the system is running again.
For complex setups, that may be hard (a.k.a. expensive) for you to put in place. In many cases, though, just putting some thought into this upfront can make it very easy.
If you are making multiple replicas (such as a product) or the cost of downtime is high, it is worth investing in tools to help.
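To make that concrete, here is a minimal sketch (in Python, purely for illustration) of what a “single button press” rebuild could look like: every pinned package version and known-good configuration file lives in a version-controlled manifest, and one script puts the machine back into that state. The manifest name, the use of pip, and the file layout are my assumptions for the example, not a prescription; a real measurement system might call a vendor installer or a deployment tool instead.

```python
"""Minimal sketch of a one-command rebuild script (hypothetical example)."""
import json
import shutil
import subprocess
import sys
from pathlib import Path

MANIFEST = Path("setup_manifest.json")  # hypothetical, kept in version control


def rebuild() -> None:
    """Recreate the measurement PC's software setup from the manifest."""
    manifest = json.loads(MANIFEST.read_text())

    # Install every pinned package listed in the manifest.
    # pip is assumed here purely for illustration; a real system might
    # call a vendor installer or a tool such as Chocolatey instead.
    for name, version in manifest.get("packages", {}).items():
        subprocess.run(
            [sys.executable, "-m", "pip", "install", f"{name}=={version}"],
            check=True,
        )

    # Copy the known-good configuration files into place.
    for src, dest in manifest.get("config_files", {}).items():
        Path(dest).parent.mkdir(parents=True, exist_ok=True)
        shutil.copyfile(src, dest)

    print("Rebuild complete - the system matches the manifest.")


if __name__ == "__main__":
    rebuild()
```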
This confidence is closely related to the first question but subtly different.
A good process will have consistency between systems, and you can typically achieve this by removing the potential for human error through automation.
The field of configuration management exploded with cloud computing, where “snowflake” servers were a common problem. When each server was set up by hand, each one was unique. Unique servers are harder to manage and slower to recover when a fault occurs.
If your system in test cell 1 requires a slightly different setup procedure to the system in test cell 2, then you now have two unique systems to maintain and consider.
However, if both test cells are set up in the same way, you have a single setup to consider when debugging or replicating them. This probably means less manual tweaking.
The other benefit is that you have some redundancy. If you have critical testing going on in test cell 1 and the PC dies, move the PC from test cell 2 and you are running again.
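As a sketch of how you might catch that kind of drift automatically, the following Python example compares the packages installed on a test cell PC against a shared baseline file. The baseline file name and the focus on Python packages are assumptions for illustration; the same idea applies to drivers, vendor toolkits, or OS settings.

```python
"""Sketch of a drift check between test cells (hypothetical example)."""
import json
import sys
from importlib import metadata
from pathlib import Path

BASELINE = Path("baseline_packages.json")  # e.g. {"numpy": "1.26.4", ...}


def check_against_baseline() -> int:
    """Report any package whose version differs from the shared baseline."""
    baseline = json.loads(BASELINE.read_text())
    drift = []

    for name, expected in baseline.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            drift.append(f"{name}: expected {expected}, not installed")
            continue
        if installed != expected:
            drift.append(f"{name}: expected {expected}, found {installed}")

    if drift:
        print("This system has drifted from the baseline:")
        print("\n".join(f"  - {line}" for line in drift))
        return 1

    print("System matches the baseline - no snowflake here.")
    return 0


if __name__ == "__main__":
    sys.exit(check_against_baseline())
```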
This is the dreaded “it worked yesterday” problem, and it is mainly about control over system changes.
I’ll give you a real-world example I’ve faced.
I had a piece of software that used Excel to generate a report. One day, the report generation just started crashing.
It turned out that an Excel update in the background had broken the interface. Control is the keyword here. The problem was that Excel updates were controlled by IT (well, me as well in my case, but that is a bit unusual) while engineering managed the system.
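One way software can help you regain some of that control is to check, at startup, that the dependencies you rely on still match the versions you validated against, so an update applied behind your back fails loudly rather than crashing mid-report. The sketch below uses a made-up package name and version number as stand-ins; it is not the actual Excel interface from my example, just the same idea in miniature.

```python
"""Sketch of a startup check for uncontrolled dependency updates."""
from importlib import metadata

# The dependency and version the report generation was last validated
# against. Both are hypothetical stand-ins for this sketch.
VALIDATED = {"openpyxl": "3.1.2"}


def check_report_dependencies() -> None:
    """Fail loudly if an uncontrolled update has changed a dependency."""
    for package, expected in VALIDATED.items():
        try:
            installed = metadata.version(package)
        except metadata.PackageNotFoundError as err:
            raise RuntimeError(f"{package} is not installed at all.") from err
        if installed != expected:
            raise RuntimeError(
                f"{package} {installed} is installed, but report generation "
                f"was only validated against {expected}. An update may have "
                "been applied outside of your control."
            )


if __name__ == "__main__":
    check_report_dependencies()
    print("Dependencies match the validated configuration.")
```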
Generally, you have three routes for being able to answer yes to this question: lock the system down so that nothing changes, take control of when and which changes are applied, or make the system so easy to rebuild that an unexpected change carries little risk.
Of course, the effort you need to put into this depends on the costs and risks associated with downtime of your systems and the staff required to recover them.
Hopefully, those questions help you to identify the risks that you may have in your systems. I plan to follow this up with some more technical examples of how to make your system more robust to configuration changes. Make sure to sign up/follow Wiresmith Technology to be notified when they are released.
Part 1: Why Plan For Keeping Measurement Systems Up To Date
Part 2: How Do I Keep My Windows Systems Up To Date
Part 3: How Do I Keep My Real-Time Systems Up To Date
Part 4: Writing Software to Support Easier Configuration Management