I've been thinking about quality lately and thought I'd describe how I find
verification passes a helpful tool. For background, it's axiomatic that the
earlier you find a bug, the better. "Earlier" can be thought of in a
few ways, and here I'm thinking about when the bug is detected: Compile
time or runtime. It's obviously better to catch bugs at compile time since
you can correct them before shipping. But the 4D compiler and language are
limited in what they can check. As an example, if you load up a data
structure at runtime, there is no way for the compiler to inspect it. Okay,
you've got an object/blob/arrays/whatever, and the compiler says "okay"...but
then there's a flaw in how they're populated. The interpreter catches the
problem and falls down choking. Not a great look. What's needed is a middle
pass between the two:

Compiler → Verification → Runtime
As an example, here's one of the places where I do this. I have a
somewhat involved automated search system called "SimpleSearch." Well, the
search is simple, but the code behind it is not. The SimpleSearch system is
used on the main table displayed in an output form and performs searches on
that table. So, if you've got a table for [Event] and related [Contact]
records, you might end up with code like so:
SimpleSearch_ClearDefinitions (True)  // Force the arrays clear.
SimpleSearch_SetupUI (->[Contact])  // Set the default table, build the
                                    // comparators array, set up form defaults.
Simple! What's going on here is that a bunch of data is being coordinated
to define how searches operate. Some of the features the system supports:
* Using a label distinct from a field's name.
* Offering DISTINCT VALUES in a select object instead of a free-form field
search. (Super, super handy.)
* Searching on a 'related' table (I don't use relation lines much) and
then joining back over to the target table through as many joins as needed.
* Just doing a regular one-field search.
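To make the shape of that configuration concrete, here's a sketch in Python (chosen just for illustration; the real system is 4D, and every key, table, and field name here is hypothetical, not the actual SimpleSearch format):

```python
# Hypothetical sketch of search definitions as plain data.
# Keys and field names are illustrative stand-ins only.
search_definitions = [
    {
        "label": "Organizer",             # a label distinct from the field's name
        "table": "Contact",
        "field": "Full_Name",
        "widget": "distinct_values",      # offer DISTINCT VALUES in a select object
    },
    {
        "label": "Event City",
        "table": "Event",
        "field": "City",
        "widget": "free_text",            # a regular one-field search
        # join path back from the related table to the target table
        "joins": [("Event.Contact_ID", "Contact.ID")],
    },
]

labels = [d["label"] for d in search_definitions]
print(labels)
```

The point isn't the format, it's that every definition is ordinary data: something a verification pass can walk, field by field, before any search runs.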
Anyway, there's more to it than that, but even this example shows how
involved the setup details can get. Never mind how the actual searches
work, and never mind how the configuration data is stored other than to
know that it's process-scoped. *It makes no difference how it's stored.*
Objects, records, arrays, documents, JSON, XML, some weird format of my
own, it makes no difference to the main problem. Namely, how to validate
this before it runs.
Here's how: I've got a routine that steps through every SimpleSearch setup
routine and does the following:
* Set up the definitions for the table.
* Run a series of inspections to make sure the definitions follow all of
the rules. For example, you could end up specifying a join field pair
with incompatible types. (I don't use relation lines much, as
noted.) The search would break, but the compiler would never know. The
verification routine would.
* Report any anomalies or errors with descriptive details.
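The steps above can be sketched in Python (again, just for illustration; the type table and the rule are hypothetical stand-ins, but the incompatible-join-pair check is the same idea):

```python
# Minimal sketch of a verification pass over search definitions.
# The schema below is a made-up stand-in for the real system's metadata.
FIELD_TYPES = {
    "Event.Contact_ID": "longint",
    "Contact.ID": "longint",
    "Event.Start_Date": "date",
    "Contact.Name": "text",
}

def verify_joins(definitions):
    """Return a descriptive error string for every rule violation found."""
    errors = []
    for d in definitions:
        for left, right in d.get("joins", []):
            lt, rt = FIELD_TYPES.get(left), FIELD_TYPES.get(right)
            if lt is None or rt is None:
                errors.append(f"{d['label']}: unknown join field "
                              f"{left if lt is None else right}")
            elif lt != rt:
                errors.append(f"{d['label']}: join pair {left} ({lt}) / "
                              f"{right} ({rt}) has incompatible types")
    return errors

good = {"label": "Organizer", "joins": [("Event.Contact_ID", "Contact.ID")]}
bad = {"label": "Broken", "joins": [("Event.Start_Date", "Contact.Name")]}
print(verify_joins([good, bad]))
```

The compiler would happily accept both definitions; the verification pass is what flags the second one, with enough detail to fix it.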
The reasons this works:
* This particular configuration data is only instantiated at runtime, but
then it's stable. No user action within a process changes the data for that
process. So, once you load the table display up, the configuration data is
set once and that's it.
* You instantiate the object and verify at runtime in the development copy.
* You can get complete coverage as a pre-flight step, before building or
releasing.
I've got an application build screen that I use with BUILD APPLICATION
where I run this and other pre-flight checks before I build and release.
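A pre-flight driver like that can be as simple as a loop over check routines, each returning a list of problem descriptions. A Python sketch (the check names are made up; in the real system this sits behind the build screen):

```python
# Hypothetical pre-flight runner: each check returns a list of error strings.
def check_simplesearch_definitions():
    return []   # stand-in: run the verification pass over every setup routine

def check_menu_bindings():
    return []   # stand-in for some other pre-flight check

PRE_FLIGHT_CHECKS = [check_simplesearch_definitions, check_menu_bindings]

def run_pre_flight():
    problems = []
    for check in PRE_FLIGHT_CHECKS:
        problems.extend(check())
    return problems

problems = run_pre_flight()
if problems:
    for p in problems:
        print("PRE-FLIGHT:", p)
else:
    print("Pre-flight clean; OK to build.")
```

Anything on the problems list blocks the build; an empty list means the configuration data is known-good before BUILD APPLICATION ever runs.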
This sort of thing works just great when you use data structures to control
components of your program. If you have any sort of testing-oriented
development process, the verification code is automatically a good fit.