The Holy Grail of application development
has always been to develop applications faster, with fewer bugs,
and with adequate documentation throughout the entire process:
to produce a piece of software with a feature mix that satisfies
the client's needs at the lowest cost in time and dollars.
The steps of application development are:
1. Identify the environment of the application
2. Catalog the requirements and expectations
3. Build the application
4. Test the application
5. Document the application
6. Train the users
7. Convert and/or load legacy data
8. Install the application
9. Support the application
The problem is that application development
is not linear. It is a circular, evolutionary process, but budgets
and delivery schedules, which are linear, are set early and need
to be met for serious business reasons. A quandary develops when
information comes to light during development that was not
available earlier.
Application development as a discipline has
existed for only about 40 years, with just the last 10 years or
so seeing the kinds of pressures we are familiar with today. It
is an infant discipline in the grand scheme of business history.
During that time, the tools and environment have changed dramatically.
Add to that the accelerated level of complexity, and it is no wonder
that so many poorly implemented or failed projects fill
the trade papers with horror stories.
The fact that this is not a linear process
is where the dilemma arises. As discoveries are made during the
process, the decision to go back and re-engineer based on the
new knowledge is a difficult one. It is costly in both time and
dollars. Missed milestones are difficult to justify and can be
critical in today's fast-paced business environment. The
ability to gather more information earlier in the process is
a valuable asset: a stable application cannot be built on what
is not known. No tool has been available to collect the breadth
of information that is needed in today's application development
process. This collection of information is very dynamic, involves
many people, and is subject to constant change through the life
of the project. Also, in the interest of maintaining schedules,
the development community and the user community each assume
that they understand what the other is articulating.
The fallacy of this assumption is usually discovered somewhere
during the implementation portion of the project.
To this end, OrganizeIt!
was created. Two individuals or groups working on a project who
think they understand each other but discover subtle differences
in perception during the process are on very dangerous ground.
Users assume that developers understand the subtleties of the
business process that they themselves have often not even documented.
Application developers have a propensity for assuming they understand
what they hear and take it in the simplest of terms. Many of the
developers in the IT community are relatively new to the process
and lack business and life experience. This limited scope leads
to assumptions that later prove to be fatal to a project. Looking
at basic human interaction, social culture, and various agreements
between people, we find a pattern of communication that is severely
lacking in the application development world: ALL language
used in the process must be clearly understood by everyone
involved. OrganizeIt! is a tool that facilitates
this. Any words or terms that are used must be defined,
and that definition agreed to by all involved in the project.
All words that are not understood in the common vernacular or
that have domain specific meanings must be clearly defined in
the context of the project. The ability to maintain and publish
that glossary of terms is critical to the success of any project.
Words tend to accumulate loaded meanings in domain specific use,
and unless that domain specific knowledge is clear, errors will
occur in the development process. In today's world of staff
changes during the project life cycle, valuable information is
lost with the transition of personnel. The OrganizeIt! repository
becomes the keeper of this information as it evolves and the team
changes. This information can be instantly published to rich text
format documents or HTML for distribution in printed, e-mail attachment
or web based form. At the conclusion of a meeting, the information
can be available to all members of the project, world wide in
a matter of minutes. This ability has never existed before! Application
requirements are built on the language that has been defined.
If a word crops up in a requirement that has not been defined,
it must be defined and accepted by all segments of the project
team. Requirements are statements that define what the application
must do. As requirements are defined, business rules surface.
Business rules define how the application does what it does. They
define the logic behind decisions. Business rules cannot be articulated
unless requirements are clear and requirements must have a defined
language in which to live. These three simple steps are
difficult to implement but pay off greatly during the development
process. Being well into a project and discovering that a term
was not clearly understood, or was omitted from the discussion, will
alter requirements, which in turn will have an impact on business rules.
Retrofitting this new information into the existing architecture
may be difficult, expensive in time and dollars, or even impossible.
A universally available repository such as OrganizeIt! gives
users, developers, marketers, technical writers, trainers, and
help desk staff an earlier view of this information than ever
before. With the low-cost licensing of OrganizeIt!, this information
lives on with the client after implementation. OrganizeIt! addresses
the first, second, fifth, and ninth steps of the development process.
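The discipline just described — define the terms, build requirements on those terms, then surface the business rules behind the requirements — can be sketched in miniature. The following Python fragment is purely illustrative: the glossary entries, requirement text, and word lists are invented and do not reflect the actual OrganizeIt! repository.

```python
# Hypothetical sketch of the term -> requirement -> business-rule chain.
# All names and data here are invented for illustration.

glossary = {
    "claim": "A request by a policyholder for payment under a policy.",
    "adjuster": "The staff member who evaluates a claim.",
}

requirements = [
    "Every claim must be assigned to an adjuster within one business day.",
]

business_rules = [
    "If a claim exceeds $10,000, a senior adjuster must approve it.",
]

# Words understood in the common vernacular need no project definition.
COMMON_WORDS = {
    "every", "must", "be", "assigned", "to", "an", "a", "the",
    "within", "one", "business", "day", "if", "exceeds",
    "approve", "it", "10,000",
}

def undefined_terms(statement):
    """Return domain words in a statement that the glossary has not defined."""
    words = [w.strip(".,$").lower() for w in statement.split()]
    return [w for w in words if w not in COMMON_WORDS and w not in glossary]

for stmt in requirements + business_rules:
    missing = undefined_terms(stmt)
    if missing:
        print("Needs definition before acceptance:", missing)
    else:
        print("All terms defined:", stmt)
```

Run against the sample data, the requirement passes, while the business rule flags "senior" as a term the team has not yet agreed on — exactly the kind of gap that otherwise surfaces during implementation.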
The next requirement of the development process
is to actually build the application. With more information gathered
earlier, we have a clearer target to go after. We deal with the
building of an application in a clear, well-defined process. All
aspects of the application architecture are defined and documented.
With over 40 years of full-time application development experience
with small to large multinational firms, we have the benefit
of a broad spectrum of applications. We have developed a three-level
approach to this process. First, there are components that
are used by most applications. We have built these as reusable
(and we DO reuse them) components whose behavior is controlled
by the calling application. Therefore, we don't re-create them
for every application. They are under source code control to allow
legacy applications to be moved forward if desired as new features
are created without breaking the interface contract. This includes
a scheme for client specific information, requirements and licensing
specifications to be automatically handled. All security, data
filtering and much of the printing is integrated into the underlying
Genesis Architecture as well. Because of the high degree of abstraction
and consistent use of patterns, a large amount of the services
needed are supplied by standard components. These components typically
make up about 50% of the code in an application. They
understand how to work together and have a clearly defined dependency
structure.
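To illustrate the idea of a reusable component whose behavior is controlled by the calling application, here is a hypothetical sketch in Python. The actual components are VB; the class name, rule format, and sample data below are invented for illustration only.

```python
# Illustrative sketch (not the actual Genesis components): one reusable
# component, configured differently by each calling application.

class DataFilter:
    """Reusable row filter; each application supplies its own rules."""

    def __init__(self, rules):
        # rules: mapping of field name -> predicate supplied by the caller
        self.rules = rules

    def apply(self, rows):
        # Keep only rows for which every caller-supplied predicate holds.
        return [
            row for row in rows
            if all(pred(row.get(field)) for field, pred in self.rules.items())
        ]

# Two different applications reuse the same component, each controlling
# its behavior through configuration rather than by editing its code.
hr_filter = DataFilter({"department": lambda d: d == "HR"})
big_orders = DataFilter({"amount": lambda a: a is not None and a > 1000})

rows = [
    {"department": "HR", "amount": 500},
    {"department": "IT", "amount": 2500},
]
print(hr_filter.apply(rows))   # rows visible to the HR application
print(big_orders.apply(rows))  # rows visible to the order-review application
```

Because the behavior lives in the configuration, the component itself never changes per application, which is what keeps the interface contract stable under source code control.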
The second tool in our arsenal has been developed
under the code name Genesis. This project
was started back in 1993 using Visual Basic (VB) 3.0. The concept
had been in use by our development team since 1976 during the
advent of the microcomputer era and on WANG minicomputers. But
with VB 3.0 and the release of the JET database engine, we were
in a position to exploit the use of patterns. Using User
Defined Types (UDTs) this methodology was developed and
used in several very large projects. We tied the definition of
the UDT to structured subroutines and took the first step toward
automated, pattern-based code generation from a data model.
One implementation was a VB 3.0 project using a JET (Access 1.1)
database with over 40 simultaneous connections and record counts
reaching the hundreds of thousands; the application ran successfully
for several years. They said it couldn't be done, but we
did it. With the release of VB 4.0 and the introduction of OO
development within the VB environment, a new era dawned. Genesis
has been migrated along with the releases of VB and is now implemented
in VB 6.0. The notion of placing the object definition (properties)
and all of the necessary logic (methods and events) in one piece
of code and being able to generate that code from a developer
configurable interface launched the era of intelligent objects.
Based on the notion that much of the code written for an application
is very repetitive, or at least should be, we loaded these patterns
into a Code Inference Engine and allow application-specific configuration
of the options at the time of creation. Genesis supports ADO as
well as DAO code generation. If the needs of the application change
or the data model changes (seldom happens? Yeah, right!),
we simply update the data model in its native environment (Access,
SyBase, Oracle, etc.) and synchronize it with the Genesis repository
and re-generate the objects. The Genesis Code Inference Engine
produces code at a rate of 2,000,000+ lines per minute on a Pentium
400 machine. Yes, the number stated is correct. It is not a typo.
This code is highly optimized and only contains methods and events
that are selected by the developer. The code is also tuned for
the specific database specified. All of the code generated by
the Genesis Code Inference Engine is also commented with inferred
text to help developers using the code understand its place in
the application. It should be noted that comment and blank lines
are not included in the line counts mentioned above.
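The kind of pattern-based generation Genesis performs can be shown in miniature. The following Python sketch is a stand-in only — Genesis emits VB from its own repository, and the table definition and template here are invented — but it shows the core idea: one data-model entry drives a fixed pattern that emits a complete, working class.

```python
# Miniature illustration of pattern-based code generation from a data
# model. Python stands in for the VB that Genesis actually emits; the
# table definition and template are invented for this sketch.

CLASS_TEMPLATE = '''class {name}:
    """Generated from the data model; do not edit by hand."""

    def __init__(self):
{init_lines}

    def save(self):
        # Placeholder for database-specific INSERT/UPDATE logic.
        return {{f: getattr(self, f) for f in {fields!r}}}
'''

def generate_class(table_name, columns):
    """Emit source code for one table, following a fixed pattern."""
    fields = [col for col, _ in columns]
    init_lines = "\n".join(
        f"        self.{col} = None  # {ctype}" for col, ctype in columns
    )
    return CLASS_TEMPLATE.format(
        name=table_name.capitalize(), init_lines=init_lines, fields=fields
    )

source = generate_class("customer", [("id", "Long"), ("name", "Text")])
print(source)

# The generated source can be executed to obtain a working class.
namespace = {}
exec(source, namespace)
cust = namespace["Customer"]()
cust.name = "Acme"
print(cust.save())
```

When the data model changes, the generator simply runs again — which is why regeneration after a model change is cheap compared with hand-editing every affected object.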
The intelligent objects use a set of proprietary
objects to execute common functions and lighten code weight. This
architecture stratifies the code into layers that can be updated
as needed or technology changes without breaking the application.
In most applications, the Genesis-created code accounts for about
35% to 45% of the total lines of code, and the Genesis components
account for about 50%. Between the standard objects
and the Genesis-created objects, only 5% to 15% of the code is
hand-written. A portion of that is part of standard components
that are customized for the specific application. These standard
components include features that are developer configurable such
as field background colors based on required or optional data
and data validation based on the extended data model with intelligent
error messages. Many of the features are behavior that all developers
would agree should be in all applications, but schedules just
do not allow the inclusion of this type of code complexity. A
specific point: in VB, the creation and destruction of
objects, especially structures of nested objects
and collections of objects, are critical to both the speed and
the stability of the application. Genesis ensures that the appropriate
code is in place based on the object model defined and the features
selected. The obvious benefit is that the standard objects are
well tested and the Genesis objects are generated by the Code
Inference Engine from known patterns. All of this code is virtually
error-free. Debug and test time is dramatically reduced. The behavior
is known and the outcome can be readily predicted. The cycle time
between updates to the data model or functional requirements is
greatly reduced. Genesis produces inferred text to greatly augment
the technical and user documentation process. The AnatomizeIt!
component of Genesis produces technical code documentation of
various types, including where-used and where-called mapping.
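The model-driven validation behavior described above — required fields, checks drawn from the extended data model, intelligent error messages — might look like this in outline. This is a hypothetical Python sketch, not the VB components themselves; the field metadata and message wording are invented.

```python
# Hypothetical sketch of model-driven field validation with readable
# error messages. Field metadata and messages are invented examples.

FIELD_MODEL = {
    "customer_name": {"required": True, "max_length": 50},
    "credit_limit": {"required": False, "minimum": 0},
}

def validate(record):
    """Check a record against the extended data model; return messages."""
    errors = []
    for field, meta in FIELD_MODEL.items():
        value = record.get(field)
        if meta.get("required") and not value:
            errors.append(f"{field} is required.")
            continue
        if value is None:
            continue  # optional field left blank: nothing to check
        if "max_length" in meta and len(str(value)) > meta["max_length"]:
            errors.append(f"{field} may not exceed {meta['max_length']} characters.")
        if "minimum" in meta and value < meta["minimum"]:
            errors.append(f"{field} must be at least {meta['minimum']}.")
    return errors

print(validate({"credit_limit": -5}))
print(validate({"customer_name": "Acme"}))
```

Because the rules live in the model rather than in each screen, every field gets this behavior without the per-form hand coding that schedules rarely allow.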
A significant feature of the intelligent
objects is multi-channel support for persistence. Information
can be read from one source and written to a different source
by the same object. This aids in step 7 of the application development
cycle. Data conversion can be accomplished with relative ease.
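Multi-channel persistence can be sketched as pluggable read/write channels. The Python fragment below is illustrative only — the channel interface and class names are invented — but it shows how reading from a legacy store and writing to a new one becomes a simple load/save pair on the same object.

```python
# Illustrative sketch of multi-channel persistence: one object reads
# from one channel and writes to another, so legacy data conversion
# becomes a read/write pair. The channel interface here is invented.

class DictChannel:
    """A trivial channel backed by an in-memory dict (stand-in for a DB)."""

    def __init__(self, store):
        self.store = store

    def read(self, key):
        return self.store[key]

    def write(self, key, data):
        self.store[key] = data

class IntelligentObject:
    """Persists itself through whichever channels the caller supplies."""

    def __init__(self, key):
        self.key = key
        self.data = {}

    def load(self, channel):
        self.data = channel.read(self.key)
        return self

    def save(self, channel):
        channel.write(self.key, self.data)

# Conversion: read from the legacy store, write to the new store.
legacy = DictChannel({"cust-1": {"name": "Acme", "phone": "555-0100"}})
modern = DictChannel({})

obj = IntelligentObject("cust-1").load(legacy)
obj.save(modern)
print(modern.store)
```

The same object definition serves both stores, which is what makes step 7 of the development cycle largely a matter of configuration rather than new code.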
The combined effect of these tools is that
more of the discovery is moved toward the front of the development
cycle and can be prototyped, tested and agreed upon by all parties
involved in the development process earlier than is possible with
any other method. Changes can be tried with less impact on the
schedule. Communication and documentation are available throughout
the process, rather than at the end or worse yet, not at all.
It can safely be said that if the project
fits the profile to benefit from these tools, the budget in time
and dollars can be reduced significantly. But the greater benefit
is better-tuned, more nearly error-free code that is well documented
and executes faster.