Since the data is fully flexible and broken down into the smallest possible units, individual fields and entities, the elements of the software system must likewise be broken down into the smallest possible units, modules, and be equally interconnected, giving the system's users a choice of components. Instead of monolithic systems, modularity is a must. The data model described above calls for distributed systems, in which the participants are defined in a flexible authentication and authorization system, and flexibly defined control parameters determine the workflows and the connections within the system for all participants. This approach allows a unique, complex setup for each co-worker, without programmer intervention, simply by setting the relevant parameters in the parameter setups. The machine/software must be able to retrieve information about the ever-changing context the user (librarian and patron alike) comes from; from this information, the next step can be derived automatically, and the user can be supported in their work by automated, fluent workflows.
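To make this idea concrete, the following minimal sketch illustrates how control parameters might drive workflow derivation from a user's context. All names (roles, steps, the parameter table) are hypothetical illustrations, not taken from any existing library system; a real system would load the parameter table from an administrative setup rather than hard-coding it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    role: str          # e.g. "librarian" or "patron"
    department: str    # organizational unit the user belongs to
    current_task: str  # the step the user has just completed

# Control parameters: (role, completed step) -> next workflow step.
# Administrators edit this table; no programmer intervention is needed.
WORKFLOW_PARAMETERS = {
    ("librarian", "receive_item"): "catalog_item",
    ("librarian", "catalog_item"): "shelve_item",
    ("patron", "search_catalog"): "place_hold",
    ("patron", "place_hold"): "await_pickup_notice",
}

def next_step(ctx: Context) -> str:
    """Derive the next workflow step automatically from the context."""
    return WORKFLOW_PARAMETERS.get(
        (ctx.role, ctx.current_task),
        "no_action_defined",  # fall back when no parameter matches
    )

if __name__ == "__main__":
    ctx = Context(role="librarian", department="acquisitions",
                  current_task="receive_item")
    print(next_step(ctx))  # -> catalog_item
```

Because the workflow lives entirely in the parameter table, changing a co-worker's setup means editing data, not code.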
There is huge potential in sharing the software with the community. The tendency of the modern era is towards open source systems, where the community can support the development of certain functions. The smaller the modules are, the greater the chance that they can be interconnected in a flexible and fruitful way. By establishing commonly accepted rules for the modules and the communication infrastructure between them, freedom of choice is guaranteed: variations of the same module can be produced and made available, and existing modules can be customized where needed. The ideal solution, of course, is for the processes to be driven within flexible modules by control parameters and setups, rather than maintaining separate modules for individual variations.
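As a sketch of this principle (the interface, module names, and parameter values below are hypothetical, chosen only for illustration), a commonly accepted contract between modules lets implementations be swapped by a setup parameter rather than by writing a separate module for each variation:

```python
from typing import Protocol

# A commonly accepted rule between modules: every notification module
# must satisfy this interface, whatever its internal implementation.
class NotificationModule(Protocol):
    def notify(self, recipient: str, message: str) -> None: ...

class EmailNotifier:
    def notify(self, recipient: str, message: str) -> None:
        print(f"[email] to {recipient}: {message}")

class SmsNotifier:
    def notify(self, recipient: str, message: str) -> None:
        print(f"[sms] to {recipient}: {message}")

# Interchangeable variations of the same module, selected by a
# control parameter instead of separate code paths.
MODULE_REGISTRY: dict[str, NotificationModule] = {
    "email": EmailNotifier(),
    "sms": SmsNotifier(),
}

def send_overdue_notice(channel_parameter: str, patron: str) -> None:
    """Select the implementation from a setup parameter at run time."""
    module = MODULE_REGISTRY[channel_parameter]
    module.notify(patron, "Your loan is overdue.")

send_overdue_notice("email", "patron@example.org")
```

A community-contributed variant only has to honor the shared interface to become a drop-in choice for every installation.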