It's been said that, often to our disbelief, we eventually turn into our parents. As different as I am from my father, I often see some unexpected truth in that idea.
When my father was young he had visions of flying planes in the Air Force, something that wasn't meant to be. He ended up channeling that creative energy into building highly realistic airplane models and became quite good at it; it was a passionate hobby for him. When I was young I also had a fascination with flying planes, specifically massive, complex planes like the biggest jetliners and supersonic transports. I remember having a huge poster of a 747 flight control panel on my bedroom wall and learning what each control did. I was fascinated that such a complex system could be so powerful and meaningful, and the idea of controlling it seemed magical.
Today I don't fly planes (well, except in simulations), but it occurs to me that I have the same obsession with models and complex systems, just models of a different kind. Growing up around plastic airplanes, I had a bit of a mental block about 'model' referring to anything other than that, and I didn't really get it when I read about computer programs or mathematical equations being referred to as models. That changed when I read a book about simulations that explained the idea in a way that finally clicked. It described a model as something sufficiently similar to some other thing, sharing enough key attributes, that one can observe the model's behavior, or poke and prod it to see what it does, and some of the results will be meaningful for drawing conclusions or predicting the behavior of the modeled thing. You can increase the weight of a model boat and watch it sink, just like a real boat would, and if you have an equation that describes the physics involved, you can play with its variables and see the same result.
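The boat-and-equation idea can itself be put into code. Here is a minimal toy sketch assuming simple Archimedes-style buoyancy; the function name and all the numbers are purely illustrative, not from any real naval model:

```python
# A toy buoyancy model: a hull floats as long as the weight of water it
# could displace exceeds the total weight of the boat plus its load.
# (Illustrative numbers; a real naval model is far more involved.)

WATER_DENSITY = 1000.0  # kg per cubic meter (fresh water)

def floats(hull_volume_m3: float, boat_mass_kg: float, load_kg: float) -> bool:
    """Archimedes' principle: buoyancy supports at most the mass of the
    water the fully submerged hull would displace."""
    max_supported_mass = WATER_DENSITY * hull_volume_m3
    return boat_mass_kg + load_kg <= max_supported_mass

if __name__ == "__main__":
    # "Poke and prod" the model: keep adding weight until it sinks.
    for load in (0.0, 400.0, 800.0, 1200.0):
        print(load, floats(hull_volume_m3=1.0, boat_mass_kg=300.0, load_kg=load))
```

Playing with the variables here (hull volume, boat mass, load) is exactly the kind of poking and prodding the book described, just done on an equation instead of a plastic hull.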
I know this sounds really obvious and simple, but once I thought about it, I started noticing models everywhere. When I noticed that one system (Thing1) shared some attributes with a different system (Thing2), I started asking myself, "What happens with Thing1 that might also happen with Thing2?" In other words, what does Thing1 have to teach me about Thing2? That is a simple but powerful idea. Maybe you could argue that a key part of human intelligence is our capacity for abstract comparisons that lead us to novel conclusions.
The other day I was trying to explain a nuance of our service tier technology (the way custom bindings are built with Microsoft's WCF) to a non-developer coworker, and my message wasn't making it across; he was drawing the wrong conclusions. It occurred to me that choosing and assembling the pieces of a WCF binding has some things in common with ordering at McDonald's: picking and choosing individual items (a custom binding) versus ordering a numbered combo (an out-of-the-box binding). It isn't a perfect model, but it didn't have to be. Once I framed things in those terms, the concepts became accessible and gained clarity, not just for him but for the less technical people who were listening. You can poke and prod the McDonald's menu model to see how it could break down, and you can easily see how Microsoft may have gone wrong. I'll leave that mental exercise to you, but imagine a menu with just a few numbered combos yet many individual items you have to choose just right, or you end up with something unpalatable.
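The menu model itself can be sketched in a few lines. This is not WCF code (and the combos, items, and incompatibility rules below are all invented for illustration); it only captures the shape of the analogy: presets that can't go wrong versus flexible orders that can:

```python
# A toy version of the menu analogy: "numbered combos" are known-good
# bundles (like WCF's out-of-the-box bindings such as BasicHttpBinding),
# while ordering a la carte (a custom binding) lets you mix parts,
# including combinations that don't work together. All names here are
# made up for illustration.

COMBOS = {
    1: ["burger", "fries", "soda"],
    2: ["grilled_chicken", "salad", "water"],
}

# Pairs of a la carte items that clash (an "unpalatable" order), like
# stacking incompatible elements in a hand-built binding.
INCOMPATIBLE = {("milkshake", "hot_coffee")}

def order_combo(number: int) -> list[str]:
    """A preset: there is no way to get it wrong."""
    return COMBOS[number]

def order_custom(items: list[str]) -> list[str]:
    """A la carte: flexible, but the burden of choosing a sensible
    combination falls entirely on the person ordering."""
    for bad_pair in INCOMPATIBLE:
        if all(item in items for item in bad_pair):
            raise ValueError(f"unpalatable combination: {bad_pair}")
    return items
```

`order_combo(1)` always succeeds; `order_custom(["milkshake", "hot_coffee"])` blows up at order time, which is roughly how a badly assembled custom binding fails.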
In Jurgen Appelo's Management 3.0, he talks about breakthroughs that resulted from knowledge finally being shared between disciplines (economists, biologists, mathematicians, etc.) where independent models had been developed for real-world systems, yet those models proved powerful enough to explain systems entirely outside their originating domain. The modeled systems themselves can even serve as models for other systems: think of network routing protocols based on the behavior of ants and pheromone trails, or machine learning approaches like neural networks, based on ideas about how the human brain stores knowledge.
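The ant idea is compact enough to sketch. Below is a minimal toy of the pheromone mechanism, not any real routing protocol; the path names, lengths, and reinforcement rule are all illustrative assumptions. Paths that work get reinforced, which makes them more likely to be chosen again, so good routes emerge with no central planner:

```python
# Toy pheromone-trail model: ants pick a path with probability
# proportional to its pheromone level, and shorter paths get stronger
# reinforcement (more round trips per unit time). Over many ants, the
# short path comes to dominate. Purely illustrative numbers.
import random

pheromone = {"short_path": 1.0, "long_path": 1.0}
LENGTH = {"short_path": 2.0, "long_path": 5.0}

def choose_path() -> str:
    """Roulette-wheel selection weighted by pheromone level."""
    total = sum(pheromone.values())
    r = random.uniform(0, total)
    for path, level in pheromone.items():
        r -= level
        if r <= 0:
            return path
    return path  # guard against floating-point leftovers

def run_ant() -> None:
    path = choose_path()
    pheromone[path] += 1.0 / LENGTH[path]  # shorter path, stronger trail

if __name__ == "__main__":
    random.seed(0)
    for _ in range(1000):
        run_ant()
    print(pheromone)  # short_path ends up with far more pheromone
```

The positive feedback loop is the whole trick: early random luck plus stronger reinforcement on the short path snowballs into a stable preference, which is the behavior routing researchers borrowed.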
I recently shifted from writing software to a scrum master role that involves writing a lot more emails and a lot less code. It calls for a different focus: processes, people, and interactions. Fortunately for me, it turns out that the model of writing good code has a lot to teach me about good practices in a non-coding role. The principles of coupling and cohesion, simple components, refactoring, and eliminating redundancy can be as applicable at the enterprise level, describing how an organization can be structured to work well, as they are in describing good coding practices. It's not a perfect model (no model is), but sometimes thinking about an organizational problem by finding its software-coding analog, and mapping the solution back from that domain, yields new insights.
If you open your mind to noticing similarities between seemingly different things, viewing one as a metaphor or model for the other, you might find the model has things to teach you. When I observe something that surprises me or seems counter-intuitive, I try to ask: does this have relevance in other domains? I have more examples, but I think you get the gist, so I'll end here for now.
What metaphors, analogies, or models have you encountered that taught you something?