The AI Era: More Abstraction or the Death of Software Engineers?

Wed Sep 11 2024

This article was written following the announcement of OpenAI's o1 model and the standard doomsaying about the 'death of software engineers' that accompanies each jump in publicly accessible LLMs.

Programming has come a long way since its inception, evolving through numerous layers of abstraction, from punch cards to the AI models we use today. But even as the tools and technologies have progressed, the essence of programming (and here I specifically mean programming jobs) has remained constant: solving business problems and delivering value through software. The rise of AI is just another chapter in this journey, another tool that enables us to provide greater value, faster and more efficiently than ever before.

In the early days of computing, programming involved physically punching holes in cards to represent machine instructions. Each card corresponded to a specific operation, and a stack of punch cards represented an entire program. The process was tedious, error-prone, and highly manual, but it laid the foundation for modern computing.

As computers became more advanced, 'higher-level' languages emerged: first assembly, then COBOL and FORTRAN. No more punch cards. These languages abstracted away the complexity of raw machine code, allowing programmers to write more readable and maintainable instructions. This marked the beginning of a trend: each new programming language (or tool) abstracted away more complexity, allowing developers to focus on solving problems rather than managing low-level details.

The 1990s and early 2000s brought further abstraction with the rise of object-oriented programming (OOP), frameworks, and the web. Languages like JavaScript, Python, and Java abstracted away concerns such as memory management and operating system dependencies. More recently still, frameworks such as React, Django, and Ruby on Rails have let developers build full applications quickly, without reinventing the common plumbing.
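To make the layering concrete, here is a small illustrative sketch in Python (standard library only, my own example rather than anything from a specific framework): a few lines that do work which once demanded manual memory management and low-level string handling.

```python
# Counting word frequencies: no registers, no manual memory management,
# no platform-specific details; the language and standard library absorb them.
from collections import Counter

def word_frequencies(text: str) -> Counter:
    """Return a count of each word, lowercased and stripped of basic punctuation."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return Counter(words)

print(word_frequencies("To deliver value, deliver working software."))
# Counter({'deliver': 2, 'to': 1, 'value': 1, 'working': 1, 'software': 1})
```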

At its core, AI is yet another layer of abstraction. While earlier tools helped automate processes or streamline development workflows, AI helps by abstracting complex decision-making, pattern recognition, and optimisation tasks. With AI, tasks that once required deep domain expertise, such as data analysis, image recognition, or language processing, can be handled by machine learning models trained on massive datasets.
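To make that shift concrete, here is a minimal sketch (assuming scikit-learn, a library I'm introducing for illustration, and entirely made-up toy data): the 'decision rules' are learned from examples rather than written by hand.

```python
# Minimal sketch of pattern recognition as an abstraction (assumes scikit-learn).
# We never write the classification rules ourselves; the model infers them from data.
from sklearn.linear_model import LogisticRegression

# Toy data: weekly hours of product usage vs. whether the customer renewed.
X = [[1], [2], [3], [10], [12], [15]]   # feature: hours per week
y = [0, 0, 0, 1, 1, 1]                  # label: 1 = renewed, 0 = churned

model = LogisticRegression()
model.fit(X, y)                          # the "rules" are learned, not hand-coded
print(model.predict([[2], [11]]))        # likely [0 1] for this toy dataset
```

The point is not the model itself but the shape of the workflow: the expertise that once went into hand-writing rules now goes into choosing data, features, and evaluation criteria.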

But as with every previous abstraction, the goal remains the same: to deliver value. Whether you’re using punch cards or neural networks, the purpose of programming is to solve real-world problems efficiently. AI might allow us to solve more complex problems (or, perhaps more realistically, to solve our own variants of widely solved problems more quickly), but it doesn’t change the fact that every piece of software must ultimately deliver value to users and businesses.

Whether it’s automating processes, improving user experiences, or making data-driven decisions, the tools and abstractions we use are just means to that end.

Each layer of abstraction frees developers to think more strategically and focus on what really matters: how the software we build impacts the business, improves processes, and delivers measurable value.

LLMs are not perfect, and they are only as powerful as their training data. Pattern recognition can only get them so far: for novel problems, and for less popular languages, these tools are still lacking.

As we embrace AI and future technologies, it’s crucial to keep this perspective. The tools we use will evolve, but our mission as programmers remains the same. We are here to solve problems, create value, and make the world a little better with every line of code we write.