... The more things stay the same...

By Dennis Tribble posted 05-21-2019 08:48

  
I received an email from HIMSS the other day inviting me to learn about an exciting new concept called "Edge Computing". If you Google "Edge Computing" you can find a host of articles about it, many of which are thinly-veiled advertisements for somebody's "edge solution".

Simply put, the notion of edge computing appears to be that data processing is decentralized to the devices that collect the data, so that only processed results are sent over the wide-area network and stored in a central data warehouse. The argument for edge computing is that it harnesses the combined raw processing power of thousands (or millions?) of devices and significantly reduces the bandwidth required for networked data processing.
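To make that idea concrete, here is a minimal sketch (in Python) of the pattern being described: raw readings stay on the device, and only a small summary crosses the network to the central store. The names here (EdgeDevice, send_upstream, the infusion-pump example) are purely illustrative, not any particular vendor's API.

```python
import json
import statistics

def send_upstream(payload: dict) -> None:
    # Stand-in for a real network call to the central data warehouse.
    print(f"uploading {len(json.dumps(payload))} bytes:", payload)

class EdgeDevice:
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.readings: list[float] = []

    def record(self, value: float) -> None:
        # Raw data is collected and kept on the device itself.
        self.readings.append(value)

    def flush_summary(self) -> None:
        # Only the processed result crosses the wide-area network.
        if not self.readings:
            return
        summary = {
            "device": self.device_id,
            "count": len(self.readings),
            "mean": round(statistics.mean(self.readings), 2),
            "max": max(self.readings),
        }
        send_upstream(summary)
        self.readings.clear()

pump = EdgeDevice("infusion-pump-42")
for reading in [98.6, 99.1, 98.9, 100.2]:   # imagine thousands of raw samples
    pump.record(reading)
pump.flush_summary()   # one small message instead of every raw sample
```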

Wait a minute... it seems to me I have heard this siren song before!

I got started in informatics back in the early 1980s, when most computers were mainframes whose terminals were massive and had just enough innate intelligence to gather data from you and send it back to the mainframe for processing. The terminal could do some crude editing on the input data and tell you that you had not provided all the information necessary to process it, but not much else.

From there we went to mini-computers, which were "mini me" mainframes aimed at standalone departmental solutions that could contribute pre-processed data to a mainframe.

About the time those became established in healthcare, personal computers arrived on the scene and demonstrated that they could do all the processing at the terminal, limiting the amount of central processing power necessary to run a business. That central processing power resided in "servers" which were personal computers on steroids (in the early days, primarily Unix boxes) whose primary function was to provide centralized data storage and user management.

As the years have moved on, those networks of personal computers (or workstations, as we now call them) gained more and more processing power, as did the servers. Servers began to take on more of the processing burden as it became clear that maintaining individual instances of software on individual workstations was difficult, time-consuming, and a serious complication to upgrades.

Eventually, we began moving to hosted solutions in which "thin clients" running in a browser use web pages to collect data from users and send it back to servers for processing. At least one of the major HIS companies delivers its products via thin clients interacting with central servers that host the real data processing applications. Wait a minute... doesn't that sound like a mainframe?

Now we are being asked to embrace "EDGE COMPUTING", which moves the processing back out to the workstations in the field (to the "edges" of the network). The only difference is that those "workstations" are now the "Internet of Things" and represent many different types of devices with many different data processing capabilities (like your refrigerator, your home automation system, your personal fitness devices, and so forth).

The concept is so "new" that even the Wikipedia article describing it contains all sorts of warnings about unresolved and undefined references.

I'm not complaining, I'm just wondering why we keep rolling this wheel down the road. Haven't we been here before? 

What do you think?

Dennis A. Tribble, PharmD, FASHP
Ormond Beach, FL
DATdoc@aol.com

The contents of this blog represent my own opinions, and not necessarily those of ASHP or my employer, BD.

Comments

05-21-2019 12:10

Exactly the kind of commentary I had hoped to elicit. None of these "new" technologies is a silver bullet.

05-21-2019 11:32

This is absolutely "Déjà vu all over again."

I spent nearly 20 years - beginning in 1978 - at Digital Equipment Corporation - the creators of the minicomputer - making the life of mainframe computer makers miserable! ;) Back then the concept of "distributed computing" was both new and more disruptive than anyone imagined. While there, I got to run our desktop systems business as we tried - with very limited success - to keep the PC from doing to us what we did to mainframes. In healthcare, about the only clearly recognizable technology from that era is MUMPS (which formed some of the technology foundation for Epic).

As to "why we keep doing this to ourselves?" - the answer is pretty simple.  "Money talks."

It's incredibly cheap to put an app on a powerful dedicated computer in the hands of a user -- IF that computer is a smartphone.

If one were to put that same app on a "cloud" computer array, the cost of that app rises dramatically - and the app now consumes network bandwidth, IT staff time, etc. There is no doubt that the "system" will be more professionally managed and likely better protected if it is in a well-managed and secured cloud system.

This was the identical argument that was fought between minicomputers and mainframes,  between PC users and IT, between PC LANs and mainframes, etc. etc.

Hopefully all of those debates taught us some things:

  1. New distributed devices are not "born" ready to be reliable, secure, or trustworthy. No surprise that we are seeing issues with "the Internet of Things" every day.
  2. Forcing each user to become their own full-fledged system manager is a guaranteed path to inconsistency.
  3. Do not assume your IT organization can just absorb a new generation of distributed systems/apps. Or that they are wrong when they say no!
  4. End Users will always want to deploy new stuff faster than anyone else could call "comfortable."
  5. Expect "Standards Wars." Way back, IBM and DEC fought over "SNA vs. DECnet" (both were beaten by TCP/IP). Amazon, Apple, and Google are warring over the ecosystem for personal assistants. These WILL impact all IoT devices in one way or another. Telecom is warring over 5G and promising that "5G will change everything."

Best

Dennis