I received an email from HIMSS the other day inviting me to learn about an exciting new concept called "Edge Computing". If you Google "Edge Computing" you can find a host of articles about it, many of which are thinly-veiled advertisements for somebody's "edge solution".
Simply put, the notion of edge computing appears to be that data processing is decentralized to the devices that collect the data, and only processed data is sent over the wide-area network and stored in a data warehouse. The argument for edge computing is that it harnesses the accumulated raw processing power of thousands (or millions?) of devices and significantly reduces the bandwidth required for networked data processing.
Wait a minute... it seems to me I have heard this siren song before!
I got started in informatics back in the early 1980s, when most computers were mainframe devices whose terminals were massive and had just enough innate intelligence to gather data from you and send it back to the mainframe for processing. The terminal could do some crude editing on the input data, and could tell you that you had not provided all the information necessary to process it, but not much else.
From there we went to mini-computers, which were "mini me" mainframes aimed at standalone departmental solutions that could contribute pre-processed data to a mainframe.
About the time those became established in healthcare, personal computers arrived on the scene and demonstrated that they could do all the processing at the terminal, limiting the amount of central processing power necessary to run a business. That central processing power resided in "servers" which were personal computers on steroids (in the early days, primarily Unix boxes) whose primary function was to provide centralized data storage and user management.
As the years moved on, those networks of personal computers (or workstations, as we now call them) gained more and more processing power, as did the servers. Servers began to take on more of the processing burden as it was discovered that maintaining and upgrading individual instances of software on individual workstations was difficult, time-consuming, and error-prone.
Eventually, we began moving to hosted solutions in which "thin clients" running on a browser use web pages to collect data from users that get sent back to servers for processing. At least one of the major HIS companies delivers their products via thin clients interacting with central servers that host the real data processing applications. Wait a minute... doesn't that sound like a mainframe?
Now we are being asked to embrace "EDGE COMPUTING" which moves the processing back out to the workstations in the field (to the "edges" of the network). The only difference is that those "workstations" are now the "internet of things" and represent a lot of different types of devices with a lot of different data processing capabilities (like your refrigerator, your home automation system, your personal fitness device(s), and so forth).
The concept is so "new" that even the Wikipedia article describing it contains all sorts of warnings about unresolved and undefined references.
I'm not complaining, I'm just wondering why we keep rolling this wheel down the road. Haven't we been here before?
What do you think?
Dennis A. Tribble, PharmD, FASHP
Ormond Beach, FL
The contents of this blog represent my own opinions, and not necessarily those of ASHP or my employer, BD.