The Separation Model – Part 4
It was 2002, and I remember working with FileMaker 5/6 and the concept of the Data Separation model being discussed at the annual FileMaker Developer conference. I remember thinking, back then, that FileMaker just wasn't the tool to even consider this concept - even though you had multiple files for one database. While an avid FileMaker user/developer, I was rapidly becoming familiar with PHP 4 and knew that true separation was similar to what you found in a PHP setup - a backend (MySQL), middleware (PHP) and a frontend (web browser). FileMaker always had to have those "extra" calculations somewhere in order to make your user interface even halfway decent.
Today, however, it's a completely different situation. In fact, I might go so far as to say the Data Separation model should be your first consideration when starting any new FileMaker solution. The reason is pretty simple - it's more possible today than ever before to keep things clean and organized. There are functions and methods to keep your data file close to 100% clean of the extra fields, calcs, scripts and other "extra" stuff that just comes along with developing in FileMaker.
In this video, I showcase how I address one of my own self-imposed problems and also remind myself of the one custom function which, when used with script triggers, can answer a ton of the issues you have when wanting to display things in the interface. This especially applies when you formerly had to add a lot of that "extra" stuff to make your FileMaker solution work the way you want.
If you're missing that one piece of knowledge keeping you from a clean data file, then this video will certainly have what you need!
Comments
Open question about Data abstraction model
Hi Matt, hi everybody!
First of all, thanks a lot, Matt, for your efforts in spreading knowledge.
If I am not wrong, this is the second time you have embedded your data abstraction model in a presentation.
Even though the ATTRIBUTE table is really interesting, this technique has big drawbacks when I need to access its data, because the nature of the VALUE field is not explicit.
I mean, each tuple declares its kind - for example, TYPE=1 stands for an "address" while TYPE=2 refers to a phone… - but nothing tells me the actual nature of the VALUE field. Of course, this problem does not occur when a GROUP_ID is not necessary (i.e. phone or email).
So, in short, data abstraction may lead to tremendous trouble with data access and exporting.
From this comes my question: "Is this implementation of Data Abstraction only an academic exercise, or can DA have a positive impact on our development strategies?"
Thanks to everyone who commented.
/Stefano
There are two sides to every coin
Hey Stefano,
Thanks so much for your reply. First, I would like to say that anything I present in any of my videos does not attempt to be an "end all" solution for all situations for all people. No single solution can ever achieve that. To that end...
ANY database structure (or abstraction) being used must be evaluated based on its use case. If breaking out a phone or email field is required because the field must stand alone (be it for data typing reasons or query performance - for the purpose of searching or statistical reporting), then that's certainly the approach you should take.
Secondly, one thing we're all good at (people in general) is operating from a vantage point that backs up what we already know. When you mention "big drawbacks", I have to evaluate what you define as a "drawback" and what makes it "big". Is it situational and subjective, or general and detrimental? I doubt it's the latter, as you'll see.
If the drawback is not being able to simply search an email field directly, as opposed to spending a small amount of effort to create a custom search routine - or using the built-in QuickSearch feature (which provides its own inverse of a drawback, i.e. a benefit) - then I'd have to argue that the abstraction isn't so much a drawback as it is an inconvenience to your current vantage point.
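To make that concrete, here's a rough sketch of what such a custom search routine could look like as a script, assuming an ATTRIBUTE table with TYPE and VALUE fields along the lines you describe (the script name, layout name and type numbers are just made up for the illustration):

    # Hypothetical script: "Find Contact Info" - searches the abstracted attribute data
    Set Variable [ $searchText ; Value: Get ( ScriptParameter ) ]
    Go to Layout [ "ATTRIBUTE" (ATTRIBUTE) ]
    Set Error Capture [ On ]
    Enter Find Mode [ Pause: Off ]
    # One request per contact-info type (2 = phone, 3 = email in this made-up numbering)
    Set Field [ ATTRIBUTE::TYPE ; 2 ]
    Set Field [ ATTRIBUTE::VALUE ; $searchText ]
    New Record/Request
    Set Field [ ATTRIBUTE::TYPE ; 3 ]
    Set Field [ ATTRIBUTE::VALUE ; $searchText ]
    Perform Find []

Whether a scripted routine like this or the built-in QuickSearch is the better fit depends entirely on the use case.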
Don't get me wrong, I'm not trying to say you are wrong and I am right. I'm merely saying a coin always has two sides. Yes, having to specify a group id as part of any search on that data may "seem" like extra work, but I'm not doing that. My use case does not dictate extracting an annual household income breakdown based on area code (in which case the phone number would be in multiple individual fields). My use case simply needs to be able to search for contact information - and this is done with the QuickSearch feature.
In short, the data abstraction is not an academic exercise, it is perfectly viable given the needs of the situation.
If there's one thing I can leave you with, it's this. It's the mental approach I take to most situations when structuring a database. Data is like water - it will flow wherever you direct it. If I need to break out the phone number field because the requirements change, a quick search-and-loop script can likely accommodate the new requirements within a few hours (assuming a lot of data - otherwise possibly a few minutes).
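As a rough sketch of what that kind of migration script might look like - assuming the same hypothetical ATTRIBUTE table, a related CONTACT table and a new dedicated CONTACT::PHONE field, with all names invented for the illustration:

    # Hypothetical script: "Break Out Phone Numbers"
    Go to Layout [ "ATTRIBUTE" (ATTRIBUTE) ]
    Enter Find Mode [ Pause: Off ]
    # 2 = phone in this made-up numbering
    Set Field [ ATTRIBUTE::TYPE ; 2 ]
    Perform Find []
    Go to Record/Request/Page [ First ]
    Loop
        # Copy the abstracted value into the new dedicated field on the related contact
        Set Field [ CONTACT::PHONE ; ATTRIBUTE::VALUE ]
        Go to Record/Request/Page [ Next ; Exit after last: On ]
    End Loop

Once the data has been moved, the new field can be searched, typed and reported on like any other.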
Obviously, if you're working with corporate data, with 5M customers, then your approach to the data structure may be quite different than what is shown in the video. ;)
Keep the questions coming, it's great to have the dialog!
Matt
-- Matt Petrowsky - ISO FileMaker Magazine Editor
Global Variable Refresh
Hey Matt,
You might be able to avoid the script triggers and global fields if you put your CustomList() function into the conditional formatting on an object that is not in the portal (or on the merge variable itself). If you place the conditional formatting on, for example, the Addresses rounded box label, the calculation returns the correct value and refreshes immediately...even on Windows (no flashing).
I made one small change to your calc. I just added the merge variables in the first Let() statement to set them back to zero first.
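Roughly, the shape of that conditional formatting calculation looks like this - $$ADDRESS.LIST and ADDRESSES::DISPLAY are invented names, and List() stands in here for the CustomList() call from the video:

    Let ( [
        // clear the display variable first so a stale value doesn't linger
        $$ADDRESS.LIST = "" ;
        // rebuild the display text from the related records
        $$ADDRESS.LIST = List ( ADDRESSES::DISPLAY )
    ] ;
        // always return False so the conditional format itself never applies
        False
    )

The object carrying the formula never actually changes appearance; it's just a convenient place for the calculation to be re-evaluated whenever the screen redraws.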
The only additional consideration that I see, in my brief testing, is deleting something from a portal. The display variable doesn't refresh until you click on the layout or commit the record.
Josh Ormond
Thanks for the alternative
Thanks Josh,
I did look at your file and the solution of continuing to use conditional formatting does work well. There are only a few considerations I would make before going this route.
1. How many times is the same code replicated? If you use complex CustomList() code directly on multiple layout objects, then you're not taking advantage of DRY - however, this can be solved with a UI custom function (though some developers don't like adding UI-specific custom functions to the file - although I don't know why).
2. Putting a lot of solution logic on layout elements via conditional formatting works great; it's just not that easy to follow. If another developer came in and looked at the solution, he/she may be able to decipher this from seeing the conditional formatting (provided they have it turned on in Layout mode), but it's much clearer within a script and trigger.
3. Triggers are there to be used, and scripts are more obvious. By putting the code in one script and using a trigger, you accomplish much of what is above. First, you're now DRY compliant - the code is in one place. Second, it's easier to document AND debug, and third, it's not as "hidden" as conditional formatting. The line between the UI display and the UI logic is already drawn for you, in the sense that the Manage Scripts dialog box is its own unique compartment within the logic of the solution. (A rough sketch follows this list.)
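Here's a minimal sketch of that single trigger script, assuming it's attached to the layout's OnRecordLoad trigger; the script and variable names are invented, and List() again stands in for the CustomList() call from the video:

    # Hypothetical script: "UI - Refresh Display Variables" (set as the layout's OnRecordLoad trigger)
    # Clear the display variable so a stale value doesn't linger
    Set Variable [ $$ADDRESS.LIST ; Value: "" ]
    # Rebuild the display text from the related records
    Set Variable [ $$ADDRESS.LIST ; Value: List ( ADDRESSES::DISPLAY ) ]
    # Redraw the layout so the merge variable updates
    Refresh Window []

Everything a future developer needs to know about how that display text is built now lives in one place.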
Of course, I'll probably still use conditional formatting all over the place, but these were the conclusions I came to when I had to re-evaluate my choice in the first place.
I hope that helps out. :)
Matt
-- Matt Petrowsky - ISO FileMaker Magazine Editor
Balancing Act
I agree, it definitely needs to be DRY compliant. Seems like that is easy enough to work out. I am always intrigued by finding ways to refresh variables/globals in a way that doesn't cause seizures for Windows users. So I was focused on that. LOL
I see your point with #2. It's not obvious to look for variables being set through conditional formatting. At the same time, to decipher the current setup (as a developer without previous knowledge of your design standards), I would have to open up at least 4 scripts / 2 fields / 1+ custom functions just to begin to understand how it works.
I would probably resort to using conditional formatting to set variables (at least for display-only purposes)...and then use one of your other techniques to include a developer note directly on the layout that only shows in Layout mode (stacked as far to the back of the stacking order as I can, so it gets evaluated first). Then a developer only needs to look in Layout mode to figure out how it works. I definitely don't think you are wrong...but sometimes it feels that in trying to make something simple and transparent, we could just end up making it more convoluted/complicated than necessary. It's a tough balancing act.
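For anyone reading along, one common way to pull off that kind of Layout-mode-only note - not necessarily the exact technique from Matt's earlier video - is a plain text object whose conditional formatting always applies an unreadable format:

    // Conditional formatting condition on a hypothetical "developer note" text object:
    True
    // Format applied when the condition is true: a custom text size of 1 point
    // (or a text color matching the background), which hides the note in Browse mode.
    // Conditional formatting is never applied in Layout mode, so the note stays readable there.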
At some point, I may test out using a single object, seen only in layout mode, to set almost all my display variables. I'm not sure that is 100% possible, but I need to see what challenges arise vs the benefits it gives in performance/time/clarity.
Josh Ormond
"I would have to open up at
"I would have to open up at least 4 scripts / 2 fields / 1+ custom functions, to begin to understand how it works."
Or just turn on the debugger. :)
Future videos will reveal that the OnRecordLoad trigger will be used for more than just this one feature, which may justify the use of a script. ;)
Further inspection of your file showed me you were using a $locallyScopedVariable on the layout. I'd never thought of doing that. I always assumed you had to use $$globalVariables as merge variables. This means it does address the multiple window issue. I've been educated!
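In other words (the variable name is invented for the illustration), the merge text on the layout reads <<$ADDRESS.LIST>> rather than <<$$ADDRESS.LIST>>, and the conditional formatting calculation sets the local variable instead:

    Let ( [
        // a $local variable instead of a $$global - which, as noted above, sidesteps the multiple window issue
        $ADDRESS.LIST = "" ;
        $ADDRESS.LIST = List ( ADDRESSES::DISPLAY )   // stand-in for the CustomList() call
    ] ;
        False
    )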
Also good tip on the layout developer note!
Matt
-- Matt Petrowsky - ISO FileMaker Magazine Editor
I Like Them Both
Script Debugger?! Well, if you want to do it the easy way! LOL
I personally like both approaches. My leaning will probably end up being a split at the data/UI level: use the conditional formatting tip for things that are not reliant on stored data...like showing the current user, dynamic text for labels/buttons, etc.
Looking forward to what you have coming up next.
Josh Ormond
summaries and calculations
Hey Matt,
I really like the idea of "keeping the data clean", but does this mean that summary fields and calculated fields are not kept in the data file?
What happens when the client wants to add summaries? e.g. sort the patients by treatment (from a related field) and report an average.
I guess the bottom-line question is: does derived data (i.e. a calculation or a summary field) belong in the data file when the primary function of the UI is to report derived data? I was hoping to only have to modify the interface file whenever the client came up with a new analysis...is this even possible?
Application file startup - Testing for data file availability
Hi Matt,
I've been building solutions with separate app & data files practically from the introduction of FileMaker 7 and haven't looked back since.
If I may, I would like to submit to you an issue I've stumbled upon regarding the separation model.
One of the features I've recently added as standard to all my app files is the ability to test upon startup whether the data file (hosted on a LAN or WAN server) is up and available, using error capture in the startup script to inform the user that "the system is not available at this time, please try later, and if the situation persists, please contact..." You get the picture, I presume. This prevents the user from being dropped into the file navigation dialog of their OS - the native FileMaker way of dealing with such a situation - which is totally useless and potentially confusing.
Of course, this works perfectly if the app file opens on a blank layout associated with a local table, just as in your Karate app example.
But it seems that retrofitting such a startup feature to existing app files is not possible. I have app files that, even when correctly set up, will attempt to connect to the data file before running the startup script, thus failing to correctly test for data availability.
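For reference, the kind of test I'm describing is roughly the following, assuming the startup script runs on a layout based on a local table and "DATA" is the external file reference to the hosted data file (the names and dialog text are just placeholders):

    # Hypothetical startup script in the app (interface) file
    Set Error Capture [ On ]
    # Explicitly try to open the hosted data file
    Open File [ Open hidden: On ; "DATA" ]
    If [ Get ( LastError ) ≠ 0 ]
        Show Custom Dialog [ "System unavailable" ;
            "The system is not available at this time. Please try later and, if the situation persists, contact support." ]
        Close File [ Current File ]
    End If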
Any clues to what I could be overlooking ?
Thanks