A Brief History

2) Emergence of the DBMS
The first DBMSs appeared during the 1960s, at a time when projects of momentous scale were being contemplated, planned and engineered. Never before had such large datasets been assembled in any technology. Problems on the floor were identified, and solutions were researched and developed, often in real time.
The DBMS became necessary because data was far more volatile than had originally been planned for, and because the cost of data storage media remained a major limiting factor. Data grew as a collection, yet it also needed to be managed at a detailed, transaction-by-transaction level. By the 1980s, all the major vendors of hardware systems large enough to support the evolving computerised record-keeping needs of larger organisations bundled some form of DBMS with their system solution.
The first DBMSs were thus very much vendor-specific. IBM, as usual, led the field, but a growing number of competitors and clones offered database solutions at varying entry points into the market for computerised record-keeping systems.
Through this period, the specific nature of the problems being resolved and worked around by IT Management evolved with the technology, but by one means or another the production systems of organisations around the planet ran on, year after year. The number of mainframe and minicomputer hardware vendors increased, as did the number of peripheral types and their vendors. Production sites were becoming a matrix of complex responsibilities: problem-resolution processes within IT Management grew more complex, and problem escalation often involved many parties external to the organisation.
The bundling of database operational services, such as the ability to perform and indeed schedule data backups, became routine. The IT Operations environments of this time were characterised by a myriad of system housekeeping tasks. At the core of such schedules was (and still is) the backup of the organisation's production database (or data structures).
File reorganisation and the reindexation of data were also standard, though mainly manual, housekeeping tasks. Others included the reinitialisation of storage media and the archival and management of the organisation's business documents and records, particularly those related to financial record keeping. The cyclic demands of business reporting imposed a corresponding rhythm on IT operational and administrative work.
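To make the flavour of such a schedule concrete, here is a minimal sketch of a nightly housekeeping run, written with Python's built-in sqlite3 module. The database path, the backup directory and the use of SQLite itself are assumptions for illustration only, not a depiction of any historical system.

```python
# Minimal sketch of a nightly housekeeping run: backup, then
# reorganisation and reindexation. All paths are hypothetical.
import datetime
import pathlib
import sqlite3

DB_PATH = "production.db"             # hypothetical production database
BACKUP_DIR = pathlib.Path("backups")  # hypothetical backup directory

def nightly_housekeeping() -> None:
    BACKUP_DIR.mkdir(exist_ok=True)
    stamp = datetime.date.today().isoformat()
    src = sqlite3.connect(DB_PATH)
    try:
        # 1. Backup: copy the live database into a dated backup file.
        dst = sqlite3.connect(BACKUP_DIR / f"production-{stamp}.db")
        src.backup(dst)
        dst.close()
        # 2. Reorganisation: rebuild the file to reclaim free space.
        src.execute("VACUUM")
        # 3. Reindexation: rebuild every index from its table data.
        src.execute("REINDEX")
    finally:
        src.close()

if __name__ == "__main__":
    nightly_housekeeping()
```

In the era described here, the same three steps would have been a mixture of vendor utilities and manual operator procedures rather than a single script.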
On the production floor of large IT concerns, IT Management had to cover the administration of the hardware, the data and the code. We have examined a little of the changes in the machinery, and in the data structures and their management, but what of the code? How did database application software change across the decades leading into the 1980s?
Once a database application had first been developed by a software vendor, it could be resold or relicensed to another organisation, perhaps with only minor modification if the organisations were similar. In this way, terrain by terrain, the organisational environment itself began to be mapped out by one or more specific database application software packages. Competition emerged between application software developers for the supply of these packages to organisations in every sector imaginable: scientific, government and administrative, and all forms of business.
Suffice it to say here that the application software environment of the 1980s was characterised by screens of data. The more items of data the organisation needed to maintain or access, the more screens were added to the application, so that it became a graduated means of providing organisational information tailored to the needs of the people within the organisation.
The larger these database application software packages became, the greater the need to cohesively manage their development, their deployment, and the administration of their related menu and security access rights. Modular development became popular, if not mandatory, for packages which attempted to fulfil the entire spectrum of an organisation's informational needs.
Often, for all manner of reasons, organisational needs could not be met by any single available database application software package, but only by integrating a number of packages. The requirement that an organisation prepare financial reports and keep various ledgers of operation often implied the integration of one or more separate financial packages (such as Accounts Payable, Accounts Receivable and General Ledger).
IT Management at any one (generic) organisational site often had to manage a number of separate but highly integrated database application software products. These products consisted of modules and sub-modules, and at the atomic level of some finite series of application software objects written specifically for a particular DBMS. Whenever the underlying DBMS was upgraded, often in conjunction with the release of new vendor operating system software, changes frequently had to be made to each program in this underlying series.
Conversely, when new functionality or enhancements were released in the progression of a vendor-specific DBMS, it was only through corresponding changes to the database application code that users of the entire ensemble of systems could realise any substantial advantage from those enhancements.
As you can clearly see, even though the above account could have been (and in fact was) compiled in the 1980s, little about the environment has really changed. If anything, as we shall see later, the environment is becoming far more complex.
Essentially, it is the task of IT Management at any one organisation to manage the problems arising out of the above set of operating environments, inasmuch as those problems relate to the delivery of accurate information from the DBMS to end users at a production or R&D site.
Of course, the main aim of IT Management is to harness the available technology effectively, automating every task (capable of being automated) within the operational bounds of the organisation. But such a milestone (i.e. total automation) requires a number of precursor milestones, and one of these is operational stability in as many of the above environments as possible.
Data integrity is one of the cornerstones of IT Management practice, for the ultimate responsibility for data integrity rests not with any external hardware, software or systems vendor, but with the organisation itself. It is an issue which needs to be addressed proactively: it is better to go to the database and check for any previously identified types of data integrity failure than to allow such errors to build up and multiply.
This proactive practice is characteristic of endeavours where failure cannot repeatedly be countenanced, and where much rides on the integrity of the data itself. Quality assurance of organisational data should be a necessary part of the organisational intelligence (or rules).
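As a sketch of what taking the problems to the database can look like, the following Python fragment runs a list of previously identified integrity checks against a SQLite database. The table names, column names and failure classes are hypothetical examples, not drawn from the original text.

```python
# Each check pairs a description with a query that returns no rows
# on a healthy database; any rows returned are integrity failures.
import sqlite3

INTEGRITY_CHECKS = [
    # Hypothetical failure class: detail rows whose parent has vanished.
    ("order lines without a parent order",
     "SELECT line_id FROM order_lines ol "
     "WHERE NOT EXISTS (SELECT 1 FROM orders o WHERE o.order_id = ol.order_id)"),
    # Hypothetical failure class: values outside their valid domain.
    ("invoices with a negative total",
     "SELECT invoice_id FROM invoices WHERE total < 0"),
]

def run_integrity_checks(db_path: str) -> list[tuple[str, int]]:
    """Return (description, failing-row count) for every check that fails."""
    failures = []
    conn = sqlite3.connect(db_path)
    try:
        for description, query in INTEGRITY_CHECKS:
            count = len(conn.execute(query).fetchall())
            if count:
                failures.append((description, count))
    finally:
        conn.close()
    return failures

if __name__ == "__main__":
    for description, count in run_integrity_checks("production.db"):
        print(f"{count} row(s) failed check: {description}")
```

Run regularly, such a suite catches each known class of failure while it is still small, rather than letting errors build up and multiply.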
Through the 1970s, 1980s and 1990s, the term Relational Database received increasingly widespread attention, both in theory and in practice. Relational databases could be designed so that many previously identified types of database integrity exception could no longer arise. As we shall see in the next article, such advances had been identified as requirements by computing theory in the 1970s.
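By way of a closing illustration (again a hedged sketch with a hypothetical schema, not a claim about any particular product), a declared relational constraint lets the DBMS itself refuse the orphaned-row failure that the previous example had to hunt for after the fact:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys only on request
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE order_lines ("
    "  line_id INTEGER PRIMARY KEY,"
    "  order_id INTEGER NOT NULL REFERENCES orders(order_id))"
)

conn.execute("INSERT INTO orders VALUES (1)")
conn.execute("INSERT INTO order_lines VALUES (10, 1)")  # parent exists: accepted

try:
    conn.execute("INSERT INTO order_lines VALUES (11, 99)")  # no order 99
except sqlite3.IntegrityError as exc:
    print("rejected by the DBMS:", exc)  # "FOREIGN KEY constraint failed"
```

The class of failure is prevented at the moment of insertion, rather than detected and repaired afterwards.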