How Coders Can Survive—and Thrive—in a ChatGPT World

4 tips for programmers to stay ahead of generative AI


Rina Diane Caballar is a Contributing Editor covering tech and its intersections with science, society, and the environment.


Artificial intelligence, particularly generative AI powered by large language models (LLMs), could upend many coders’ livelihoods. But some experts argue that AI won’t replace human programmers—not immediately, at least.

“You will have to worry about people who are using AI replacing you,” says Tanishq Mathew Abraham, a recent Ph.D. in biomedical engineering at the University of California, Davis, and the CEO of the medical AI research center MedARC.

So how can software developers make themselves more useful and relevant in what appears to be a coming age of LLM-centered coding? Here are some tips and techniques for coders to survive and thrive in a generative AI world.

Stick to Basics and Best Practices

While the myriad AI-based coding assistants could help with code completion and code generation, the fundamentals of programming remain: the ability to read and reason about your own and others’ code, and understanding how the code you write fits into a larger system.

“I believe AI can dramatically increase the productivity of software developers, but there is a lot more to software engineering than just generating code—from eliciting user requirements to debugging, testing, and more,” says Priyan Vaithilingam, a Ph.D. student working in the intersection of human-computer interaction and programming languages at Harvard University’s John A. Paulson School of Engineering and Applied Sciences.

One of the most integral programming skills continues to be the domain of human coders: problem solving. Analyzing a problem and finding an elegant solution for it remains a highly regarded coding skill.

“There’s a creative aspect to it, and a lot of those skills of approaching a problem are more important than the actual language or tools,” says Ines Montani, a Fellow of the Python Software Foundation and cofounder and CEO of Explosion, a software company specializing in developer tools for AI and natural-language processing. “Don’t fall into the trap of comparing yourself to the AI, which is more or less a statistical output of a large model. There are differences in what a developer does versus what the model outputs—there’s more to being a developer than just writing arbitrary lines of code.”

Additionally, good software-engineering practices are proving even more valuable than before. These include planning out the system design and software architecture, which provide useful context for AI-based tools to more effectively predict the code you need next.

“A human coder is still the one who has to figure out the structure of a piece of code, the right abstractions around which to organize it, and the requirements for different interfaces,” says Armando Solar-Lezama, an associate director and chief operating officer of MIT’s Computer Science and Artificial Intelligence Laboratory, and who leads the lab’s computer-aided programming group. “All of those are central to software-engineering practice, and they’re not going to go away soon.”

Find the Tool That Fits Your Needs

Finding the right AI-based tool is essential. Each tool has its own ways to interact with it, and there are different ways to incorporate each tool into your development workflow—whether that’s automating the creation of unit tests, generating test data, or writing documentation.

GitHub Copilot and other AI coding assistants, for instance, can augment programming, offering suggestions as you code. ChatGPT and Google’s Bard, on the other hand, act more like conversational AI programmers and can be used to answer questions about APIs (application programming interfaces) or generate code snippets.

The trick is to experiment. Play around with the AI tool, get a feel for how it works, consider the quality of its outputs—but keep an open mind for other tools. “AI is such a fast-moving field. You don’t want to just settle on a tool and then use that for the rest of your life, so you’ll need to adapt quickly to new ones,” Abraham says.

Think about appropriate use cases as well. Generative AI tools can provide a swift route to learning new programming languages or frameworks, and they can also be a quicker way to kick off small projects and create prototypes.

Clear and Precise Conversations Are Crucial

When using AI coding assistants, be detailed about what you need and view it as an iterative process. Abraham proposes writing a comment that explains the code you want so the assistant can generate relevant suggestions that meet your requirements.
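As a concrete illustration of this comment-first approach, a precise comment like the one below gives an assistant enough context to propose a matching implementation. The function shown is a hypothetical example of the kind of completion a tool might generate, not output from any particular assistant:

```python
# Prompt-as-comment: convert a file size in bytes to a human-readable
# string, e.g. 2048 -> "2.0 KB". A detailed comment like this one is
# what the assistant reads to generate a relevant suggestion.
def human_readable_size(num_bytes: float) -> str:
    """Format a byte count using binary (1024-based) units."""
    for unit in ("B", "KB", "MB", "GB", "TB"):
        if num_bytes < 1024 or unit == "TB":
            return f"{num_bytes:.1f} {unit}"
        num_bytes /= 1024
```

The more specific the comment (units, expected format, edge cases), the closer the first suggestion tends to land to what you actually need.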

For conversational AI programmers, you’ll need to know the best way to frame your prompts. This is where prompt engineering comes in.

One approach Abraham suggests is chain-of-thought prompting. This involves a divide-and-conquer strategy where you break down a problem into multiple steps and tackle each one to solve the entire problem. “Asking the model to do too much at a given time can lead to disaster. You want it to be able to work with manageable chunks of information and produce manageable chunks of code,” he says.

Instead of asking an AI programmer to code an entire program from scratch, for example, consider the different tasks the program is trying to accomplish. Divide those tasks further and ask the model to write specific functions for each. You might need to reason with the model about the steps it needs to take to achieve a task, resulting in a back-and-forth conversation.
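The decomposition Abraham describes can be sketched as follows. This is an illustrative example (the task and function names are my own): rather than one prompt for a whole word-frequency program, each small function becomes its own prompt, verified before moving on:

```python
import re
from collections import Counter

# Instead of prompting "write a word-frequency program," ask the model
# for each piece in turn, check it, then compose the verified pieces.

def tokenize(text: str) -> list[str]:
    """Step 1: split text into lowercase words, dropping punctuation."""
    return re.findall(r"[a-z']+", text.lower())

def count_words(words: list[str]) -> Counter:
    """Step 2: count occurrences of each word."""
    return Counter(words)

def top_words(counts: Counter, n: int) -> list[tuple[str, int]]:
    """Step 3: return the n most frequent (word, count) pairs."""
    return counts.most_common(n)

def report(text: str, n: int = 3) -> list[tuple[str, int]]:
    """Compose the pieces into the full program."""
    return top_words(count_words(tokenize(text)), n)
```

Each function is a manageable chunk the model can get right in one exchange, and a bug in any step is easy to isolate.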

“Treat it almost like a smart intern who knows a lot about a subject but isn’t that experienced,” Abraham says.

Precision and clarity are vital with prompt engineering. “You need to ask the model very clearly what you want, be very precise about what you’re asking it to do, and make sure you’re following up,” Abraham says.

It can also be valuable to learn the basic concepts of artificial intelligence and machine learning, as well as get a sense of how large language models work and their strengths and weaknesses. You don’t need to dive deep, but having some general knowledge can give you important context about the results.

To help you get started, Abraham recommends the OpenAI Cookbook, which has sections on prompting libraries and tools, prompting guides, and video courses, while Vaithilingam suggests reading up on the Illustrated Transformer to find out more about models and machine-learning basics.

Be Critical and Understand the Risks

Software engineers should be critical of the outputs of large language models, as they tend to hallucinate and produce inaccurate or incorrect code. “It’s easy to get stuck in a debugging rabbit hole when blindly using AI-generated code, and subtle bugs can be difficult to spot,” Vaithilingam says.

That’s why checking generated code is crucial, even though the extra step can eat into the productivity gains. But Abraham argues that “it’s easier to verify the code than it is to write it from scratch in some cases, and it’s a faster approach to generate and then verify before incorporating into whatever codebase you have.”
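The generate-then-verify workflow can be as lightweight as a handful of assertions. Here, `slugify` stands in for a hypothetical assistant-generated function; the checks below are the quick verification pass you'd run before merging it:

```python
import re

# Suppose an assistant generated this function. Before incorporating it,
# exercise the edge cases you care about with a few quick assertions.
def slugify(title: str) -> str:
    """Lowercase, replace runs of non-alphanumerics with '-', trim dashes."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Verifying is cheaper than writing from scratch, and it surfaces the
# subtle bugs (empty input, punctuation runs) that are easy to miss.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  --spaced--  ") == "spaced"
assert slugify("") == ""
```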

It might be worth putting the outputs of these models into perspective, asking the following questions: What data was this model trained on? What was filtered out and not included in that data? How old is the training data, and what version of a programming language, software package, or library was the model trained on? The answers to these questions could impact the results and provide more context about them.

Developers should also be wary of entering proprietary code into these models. Some companies, such as Tabnine, offer enterprise versions of their AI coding assistants, providing privacy while still learning an organization’s coding patterns and style.

Copyright is another factor to consider, though it’s less of a worry if you’re using these tools to complete a few lines of code or handle common, trivial tasks than if you’re producing bigger chunks of code.

“Programmers should have some sense of how original what they’re trying to do is and to what extent is it unique to their context,” Solar-Lezama says. “If the model is producing a somewhat original piece of code, it’s important to be suspicious and skeptical before putting that in a production codebase.”

An even larger issue is security, as these models may generate code containing vulnerabilities. According to Vaithilingam, software-development best practices such as code reviews and strong test pipelines can help safeguard against this risk.
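One of those common vulnerabilities is SQL injection, a classic pattern that reviews and tests are meant to catch. The sketch below (my own illustration, using Python's built-in `sqlite3`) contrasts the unsafe string-interpolated query sometimes seen in generated code with the parameterized form a reviewer would insist on:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name: str) -> list:
    # Risky pattern sometimes seen in generated code: string interpolation
    # lets crafted input like "' OR '1'='1" rewrite the query and dump rows.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str) -> list:
    # Parameterized query: the driver treats `name` as data, never as SQL,
    # so the same crafted input simply matches no rows.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()
```

A test pipeline that feeds hostile inputs to both versions flags the unsafe one immediately, which is exactly the kind of red flag experienced engineers learn to spot on sight.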

“One of the things that more experienced software engineers bring to the table is the awareness of the most common vulnerabilities in code and the most common ways in which code can be made vulnerable,” says Solar-Lezama. “They build this intuition about what to pay attention to and what raises red flags. Moving forward, these kinds of techniques are going to become more important parts of the software engineering mix.”

For programmers to survive in a generative AI world, they’ll need to embrace AI as a tool and incorporate AI into their workflow while recognizing the opportunities and limitations of these tools—and still relying on their human coding capabilities to thrive.
