Looking Back to the Road Ahead

As 2022 comes to a close, I am somewhat reflective, but mostly looking ahead. This past year was certainly tumultuous on several fronts: coming more solidly out of Covid protocols, kids firmly back in school, life contemplated, and, perhaps most impactful personally, leaving Microsoft after 24 years of service.

What did I do in my first days after leaving the company? Start coding, of course! I started coding when I was 12 years old on a Commodore PET. To say ‘coding is in my blood’ would be an understatement. I have been coding longer than I’ve been able to hold coherent conversations with adults, and I don’t see myself stopping any time soon.

I’ve always thought of coding as storytelling. You’re telling a story, converting some sort of desire into language the computer can understand and execute. The computer, for its part, is super simplistic, with a limited vocabulary. Just think about it: how much work would you have to put into giving someone directions to your house if you could only communicate in numbers, arithmetic, and simple logic (‘if’, ‘compare’, ‘then’)? You don’t have the higher order stuff like “get on the highway, head south”. You have to go all the way back to first principles and somehow encode ‘highway’ and ‘head south’. That’s why we’ve had programming languages as long as we’ve had computers, and it’s also why we’ll continue to develop more of them: this stuff is just too hard.
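To make that concrete, here is a tiny toy sketch (mine, not anything from the tools mentioned in this post) of what ‘turn left at the third intersection’ looks like once it has been reduced to the counters, comparisons, and branches that are roughly the machine’s whole vocabulary:

```python
# Toy illustration: a "higher order" direction rebuilt from first principles,
# using only a list, a counter, comparisons, and a branch.
route = ["road", "intersection", "road", "intersection", "road", "intersection"]

intersections_passed = 0
for step in route:
    if step == "intersection":
        intersections_passed += 1
        if intersections_passed == 3:
            print("turn left")  # the human-level instruction, finally expressed
            break
```

Every programming language ever invented is, in one way or another, an attempt to shorten the distance between the plain-English sentence and that last block.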

My recent weeks have been filled with various realizations related to the state of computing. When you have the leisure to take a step back and just observe the state of the art in computing these days, you can gain both an appreciation and a feeling of being overwhelmed at the same time.

It wasn’t too long ago that Adapteva (http://www.adapteva.com) was pioneering, and pushing on, a CPU architecture that packed 64 RISC cores into a single package. That was the Parallella computer. The experimental board was roughly Raspberry Pi sized, and packed quite a punch. The company did not survive, but now 64 cores is not outrageous, at least for data center class machines and workstations.

Meanwhile, nVidia, AMD, and Intel have been pushing the compute cores in graphics processors into the hundreds and thousands. At this point, the GPU is the new CPU, with the CPU relegated to mundane operating system tasks such as managing memory and interacting with peripherals. Most of the computation of consequence is happening on the GPU now, and accordingly, the GPU commands the lion’s share of the PC price. This makes sense, as the CPU has become a commodity part, with the AMD/Intel wars at a point of equilibrium. No longer can they win by juicing clock rates; now it’s all about cores, and they just keep leapfrogging each other. nVidia is not standing still, and will be dipping its toe into the general computing market (as it relates to data centers, at least) in due time.

nVidia, long a critical piece of the High Performance Computing (HPC) scene, is pushing further down the stack. They’re driving a new three-letter acronym: the Data Processing Unit (DPU). With a nod to modernity, and decades of experience in the HPC realm, the DPU promises to be a modern replacement for a lot of the disparate, discrete pieces of computing found in and around data centers.

nVidia isn’t slouching on graphics though. Aside from their hardware, they continue to make strides in the realm of graphics algorithms. NeuralVDB is one of those areas of innovation. Improving the ability to render things like water, fire, smoke, and clouds, it’s about the algorithm, not the hardware. Bottom line: better looking simulations, in less time, while requiring less energy. That is a great direction to go.

But this is just the graphics side of nVidia. There has been an explosion of algorithms in the “AI” space as well. While the headliner might be OpenAI and their various efforts, such as DALL·E, which can generate just about any image you can imagine from a text prompt, there are other efforts as well. The OpenAI Whisper project is all about achieving even better speech-to-text transcription (primarily for English).

Not to be left behind, Google, Microsoft, Meta, even IBM, and myriad researchers at companies, universities, and private labs are all driving hard on several fronts to evolve these technologies. This is the ‘overwhelm’ part. One thing is sure: the pace of change is accelerating. We don’t even have to wait for the advent of ‘quantum computing’; the future is now.

The opportunities in all this are tremendous, but it takes a different perspective than we’ve had in the past to ride the waves of innovation. There will be no single winner here, at least not yet. The various algorithms and frameworks that are emerging are real game changers. DALL·E and the like are making it possible for everyday individuals to come up with reasonable artwork, for example. This could be a threat to those who make their living in the creative arts, or it could be a tremendous new tool to add to their arsenal. More imagination and tweaking are required to make truly brilliant art compared to the standard fare individuals such as myself might come up with.

One thing that has emerged from all this, and the thing that really gets me thinking, is that conversational computing might start to emerge now. What I mean by that: DALL·E, and others, work off of prompts you type in: “A teddy bear washing dishes”. You don’t write C or JavaScript or RenderMan; you just type plain English, and the computer turns that into the image you seek. Well, what if we take that further: “Show this picture to my brother”. An always-listening system that has observed things like ‘brother’, knows the context of the picture I’m talking about, and has learned the myriad ways to send something to my brother will figure out what to do, without much prompting from me. In the case of ambiguity, it will ask me questions, and I can provide further guidance.
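To sketch what I mean, here is a minimal, entirely hypothetical example of the loop such a system might run. The contacts, channels, and wording below are invented for illustration; this is not any real assistant’s API, just the shape of the idea: context gathered over time, plus a fallback to asking a question when the intent is ambiguous.

```python
# Hypothetical sketch of conversational intent resolution.
# All names and data here are made up for illustration purposes.
contacts = {"brother": {"email": "brother@example.com", "sms": None}}  # learned context
channels = ["email", "sms"]

def resolve(request: str, current_picture: str) -> str:
    # Crude intent parse: who gets what, and over which channel.
    if "brother" in request and "picture" in request:
        target = contacts["brother"]
        usable = [c for c in channels if target.get(c)]
        if len(usable) == 1:
            return f"send {current_picture} to brother via {usable[0]}"
        # Ambiguity: ask a clarifying question instead of guessing.
        return "ask: 'Should I email or text the picture to your brother?'"
    return "ask: 'Sorry, who should get which picture?'"

print(resolve("Show this picture to my brother", "beach.jpg"))
# -> send beach.jpg to brother via email
```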

This goes far beyond “Hey Siri”, which is limited to very specific tasks. This is the confluence of AI, digital assistant, digital me, visualization, and conversational computing.

When I look back over the roughly 40 years of computing that I’ve been engaged in, I see the evolution of computers from the first PCs and hobbyist machines to the supercomputers we all carry around in our pockets in the form of cell phones. Computing is becoming ubiquitous, woven into the very fabric of our existence. Our programming is evolving too, and is reaching a breaking point where we’ll stop using specialized languages to ‘program’ the machine and instead begin to have conversations with it, voicing our desires and intents rather than giving explicit instructions.

It’s been a tremendous year, with many changes. I am glad I’ve had the opportunity to lift my head up, have a look around, and dive in a new direction leveraging all the magic that is being created.
