the singularly human activity of obtaining knowledge
While we’re on the topic of the kids in my neighbourhood and their future transformations… I often wonder about what kind of future we’re leaving for them.
To say there’s a lot of uncertainty in the tech industry these days is an understatement. Fortunately, this is nothing new. Technology has always caused upheaval. When it starts to feel like the pace of this change is too much, it helps to remember why we have technology in the first place. Quite simply, we need technology to learn about ourselves and the world around us. As a bonus, there’s always the hope that what we learn will improve life not only for some individuals but humanity as a whole. This may be a tall order, but without it there’s not much point in building all of the technological stuff upon which we’ve become so dependent.
One thing that hasn’t changed over the course of our collective forays into technology, even when we’re moving at breakneck speed, is the scientific method we use to obtain knowledge. Yes, Artificial Intelligence is taking over the tech industry. (If you still have any doubts about that, here’s a little article from Jennifer Turliuk and the infamous ChatGPT to help convince you.)
At the same time, a simulation is only ever as accurate as the real-world data on which it’s based. These days, we’ve got more sources of real-world data than ever before. We certainly need AI to help us gather and interpret all of the data we’re capable of producing. The one thing that AI will never be able to help us with is setting the goal for all that data collection in the first place. Determining the overall goal for obtaining further knowledge is a singularly human activity. Only humans can obtain knowledge, and we are the only ones who can figure out what we ought to do with it.
When it comes to simulating real-world conditions, the technology we use and the models we test have changed vastly over the past century. Thankfully, through it all the way we employ the scientific method has remained roughly the same. That is: observing something new that doesn’t match our current body of knowledge, making a new hypothesis, testing it out, and drawing a conclusion.
In the space of 50 years, we went from human computers to supercomputers. These days, you don’t need a supercomputer to run sophisticated simulations. Your own personal computer will suit the task nicely. For example, if you want to model a 1-D wave, you can do so using C or Python or some mixture of the two, like these folks from Cornell. If you’re looking for something a little more 3-D, you might consider these wind tunnel simulation programs written in Java (recommended by NASA!).
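To give you a feel for how little machinery this takes, here’s a minimal sketch of a 1-D wave simulation in Python. It isn’t the Cornell code; the wave speed, grid size, and initial shape below are all illustrative assumptions, just enough to show the standard finite-difference (leapfrog) approach in action.

```python
import numpy as np

# 1-D wave equation u_tt = c^2 * u_xx on a string with fixed ends,
# advanced in time with an explicit finite-difference (leapfrog) scheme.
# Every parameter here is an illustrative choice, not a physical dataset.
c = 1.0                    # wave speed
length = 1.0               # length of the string
nx = 101                   # number of grid points
dx = length / (nx - 1)
dt = 0.5 * dx / c          # keeps c*dt/dx <= 1, the CFL stability condition
r2 = (c * dt / dx) ** 2

x = np.linspace(0.0, length, nx)
u_prev = np.sin(np.pi * x)   # initial displacement: one half-wavelength
u = u_prev.copy()            # zero initial velocity (simple first-order start)

for _ in range(200):         # march forward 200 time steps
    u_next = np.empty_like(u)
    # central differences in both space and time
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_next[0] = u_next[-1] = 0.0   # fixed (Dirichlet) ends
    u_prev, u = u, u_next

print(f"peak displacement after 200 steps: {abs(u).max():.3f}")
```

A few dozen lines, one array library, and your laptop is doing the kind of work that once demanded rooms full of human computers.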
The fact that you can now run a wind tunnel simulation on your own computer is entirely due to the history of America’s National Aeronautics and Space Administration. They've been in the wind tunnel business from the very beginning. Hidden Figures, by Margot Lee Shetterly, chronicles the agency’s journey from testing aircraft prototypes in 1943 to the moon landing in July of 1969. Shetterly captures this distinct time in American history with an elegant accuracy, not unlike a mathematical expression itself. Here’s her description of the wind tunnels built during that era, and how they “offered many of the research benefits of flight tests but without the danger.”
Engineers blasted air over planes, sometimes full-sized vehicles or fractional-scale models, even disembodied wings or fuselages, closely observing how the air flowed around the object in order to extrapolate how the object would fly through air.
She observes that just the names of the wind tunnels they constructed inspire us
“to imagine the combination of pressure, velocity, and dimension that resided therein.”
Variable-Density,
Free-Flight,
Two-Foot Smoke-Flow
All the research that went into creating and utilising these labs served as the foundation for America’s race into space. As Shetterly says, “It would take a total of 1.2 million tests, simulations, investigations, inspections, verifications, corroborations, experiments, checkouts, and dry runs just to send the first American into space… ” Over half a century later, we continue to reap the rewards of this work.
Thanks to such a solid foundation, researchers around the world are able to build bigger and better models of real-world conditions. Our research concerns may have changed, but the basic methodology we use to test our assumptions hasn’t. Take a complex subject like climate change, where our predictions can have a tremendous impact on housing and agriculture. We have existing models to predict how waves and wind behave separately, but these may prove insufficient when attempting to determine how wind and waves interact in a particular ocean. For that we need to test our hypotheses in an environment we can measure. These measurements help us determine whether or not our modelling is correct.
Don’t just take my word for it. As Professor Jason Monty, a University of Melbourne expert in fluid mechanics will tell you, “Essential processes happen thanks to activity in the ‘atmospheric boundary layer’ between the ocean and the atmosphere, where waves and wind meet.”
Because obtaining direct measurements from the ocean itself isn’t always feasible, he and his team are building a physical simulation of these conditions. In this way the particular conditions of a given ocean can be induced and measured, to better inform models of climate change.
Our biggest problem with technology has never really been all the upheaval it causes but rather what we do with the knowledge it helps us to obtain.