I listened to this episode of The New Stack Makers podcast, which was a quick fireside-style chat between Nate Taggart of Stackery and Kelsey Hightower of Google. A couple of choice quotes really jumped out at me and gave me a chuckle:
Taggart: “I think serverless is high in the hype-cycle right now. We’ve reached peak mania over the potential. It’s always a panacea. This technology feels new when in reality it’s probably just a new version of the same old thing we do. There’s this cycle we go through of abstract away, then take back control,”
And even more poignant to consider:
Hightower: “I like to watch the old IBM commercials that talk about mainframe - and I’m a fan… to me that was the ultimate serverless experience. … You’ve got some data? You load it up, click click click, done. No monitoring, Nagios, tracing, Prometheus, cloud native, nothing. Data-in, data-out.”
I chuckled because I was reminded of my short experience working in a shop with a mainframe, but I had never considered it as an analogue to our current trend toward serverless.
At my very first job, directly after graduating, I worked in the IT department of my alma mater, where effectively all of the mission-critical data the University depended on was hosted on a VAX-based mainframe running VMS and a higher-education-specific ERP system. Yes, there was a server - it was a hefty clunker of a box sitting off in the corner of the server room, connected to a tape machine and a tape archive. But its footprint and the associated labor for administration paled in comparison to all of the single-purpose servers being bootstrapped and racked almost monthly. The mainframe was solid as a rock in terms of uptime, and had effectively no monitoring compared to the rest of the infrastructure. The application developers who wrote code for it literally just whipped up some Cobol, submitted it to the scheduler, and out came reports and/or changes to the data. That was literally it. The hardware and the OS were hardly a passing concern.
Of course, the system was problematic for other reasons. The particular ERP required using Cobol, for which it was no longer realistic to hire, and the world was moving away from it. We had about two people in the organization who were comfortable admin'ing VMS. And… a single piece of hardware was a single point of failure for a mission-critical system, after all.
But take a moment to consider the experience those developers had when writing production code (fsvo “production”) and contrast it with the myriad of tools and infrastructure we as developers are required to manipulate today, even when it is declarative in the form of Infrastructure as Code. So with “serverless”, whatever serverless ultimately means to us in a few years, we’re coming back around full circle to where the average developer, in order to meet business requirements, may yet again have an environment where they just concern themselves with manipulating data in response to events.
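To make that “data-in, data-out” model concrete, here’s a minimal sketch of what an event-driven function can look like from the developer’s side. The handler signature follows the common AWS Lambda convention, but the event shape and the local invocation below are hypothetical illustrations, not any particular platform’s API:

```python
# A sketch of the serverless programming model: the developer writes one
# function that responds to an event, and everything beneath it (hardware,
# OS, scaling, scheduling) is someone else's concern -- much like submitting
# a Cobol job to the mainframe scheduler.

def handler(event, context=None):
    """Respond to a single event: data in, data out."""
    records = event.get("records", [])
    # Business logic only -- no servers to patch, no Nagios, no Prometheus.
    total = sum(r.get("amount", 0) for r in records)
    return {"processed": len(records), "total": total}

# Local simulation of the platform invoking the handler with an event.
result = handler({"records": [{"amount": 40}, {"amount": 2}]})
print(result)  # {'processed': 2, 'total': 42}
```

The point isn’t the specific code; it’s that, as with the mainframe scheduler, the unit of work the developer owns is just the transformation of data.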
So, even if you missed out on the current wave or trend that’s riding its way up the hype-cycle, don’t worry - the mainframe folks missed out too.
Note: I think Kelsey’s quote could be read as hand-waving away the concept of monitoring. We’ll still have to monitor application-level and throughput metrics, of course, but that monitoring won’t include things lower in the stack. Hardware-utilization metrics likely aren’t going to matter to the developer when you don’t even have a server, or a container running on a server.