Overcoming Common AI-Assisted Coding Challenges
While AI-assisted coding offers incredible possibilities, it's fair to say it's not always a walk in the park.
I hope you're doing well, enjoying your summer vacation in the northern hemisphere or a nice winter holiday in the southern part of this pale blue dot.
The Pale Blue Dot, photographed by Voyager 1 in 1990, 6 billion km away from Earth.
As amazing as AI-assisted coding can be, let's be honest—it’s not always a bed of roses.
While these tools can significantly boost productivity and streamline complex tasks, they also come with their fair share of challenges.
From unexpected bugs to quirky behaviour, navigating the world of AI-assisted coding can sometimes feel like an adventure with a few bumps along the way.
In this email, I'll share three of the most common obstacles I’ve encountered so far and offer practical tips on how to bypass them.
Let's dive in and make your AI-assisted coding journey as smooth as possible!
Trapped in a looping bug
By far the most common issue you’ll face is getting trapped in a nasty looping bug.
The AI will suggest some code which doesn’t work. You’ll then ask the AI to fix the bug and the agent will introduce another bug.
Naturally, you’ll ask the AI to fix this new bug, and it will revert to the first iteration of the code, reintroducing the first bug, and so on.
It’s even more infuriating when ChatGPT repeatedly acknowledges that “this can be frustrating…”. It is, indeed.
One of the best ways I’ve found to escape this looping mayhem is to switch LLMs.
I would, for instance, feed the code suggested by ChatGPT into Mistral Codestral or into Claude 3.5 Sonnet (via Cody in VS Code).
In most cases, this can break the loop. In some cases, it will introduce new bugs.
Another strategy is to start a brand new conversation with your favourite LLM and articulate your brief in a slightly different way.
A different input can result in a different - hopefully bug-free - output.
Use multiple LLMs to break a loop of bugs or re-articulate your brief in a brand new conversation.
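For example (a purely hypothetical brief): instead of replying “it’s still broken, please fix it” for the third time, open a fresh conversation, paste the latest version of the code, and restate the goal from scratch, e.g. “This form should validate the email field before submission. Here’s my current code. Rewrite the validation logic from scratch.” Describing the desired behaviour, rather than the bug, often sends the model down a different path.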
Design Misunderstandings
You probably have a clear image in your head of how the elements should be positioned on your pages.
But if you brief ChatGPT to create the page solely based on a textual description of the structure, you will usually be disappointed with the output.
These days, the best way to get as close as possible to the expected design is to feed ChatGPT 4o a screenshot of the section of the page you want to create.
Don’t feed the whole page in one shot; proceed gradually, using the first output, once fine-tuned, as context for the following sections (to get a consistent rendering).
Ask ChatGPT to use Tailwind CSS (via CDN when in development mode).
It’s the easiest way to refine your design.
You can get inspiration from Tailwind CSS Showcase.
If you’re looking for something simple to start with, you can also use readymade sections from Preline: https://preline.co/examples.html
It’s 100% Free. Simply copy-paste the HTML code.
Don’t forget to wrap the code in an HTML structure, including the Tailwind CDN script in the head section:
<!doctype html>
<html>
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <script src="https://cdn.tailwindcss.com"></script>
  </head>
  <body>
    <h1 class="text-3xl font-bold underline">
      Hello world!
    </h1>
  </body>
</html>
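Note that cdn.tailwindcss.com is Tailwind’s Play CDN, designed for development and prototyping; for production, Tailwind recommends a proper build step (e.g. the Tailwind CLI).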
Here’s an example of a premade Hero Section.
You can find a live demo in my Codepen.
By the way, circling back to my previous advice, I did a quick experiment: I fed ChatGPT 4o the screenshot above and asked it to give me the HTML code to reproduce the hero section, using Tailwind CSS via the CDN.
I included some additional requirements, to get as close as possible to the expected output.
I must say that the result was VERY close to the readymade HTML available on Preline.
Here’s what it looks like when executing the code (there’s obviously a placeholder for the image, not available in the generic code).
You may have noticed the difference in terms of font weight for the main title, but that’s easy to tweak with Tailwind CSS.
For the title, ChatGPT gave us:
<h1 class="text-4xl font-extrabold text-gray-900 mb-4">Start your journey with <span class="text-blue-600">Preline</span></h1>
I then tweaked it to:
<h1 class="text-6xl font-bold leading-tight text-gray-900 mb-4">Start your journey with <span class="text-blue-600">Preline</span></h1>
Bigger text (6xl), less bold, with some line height, but not too much (leading-tight). Here’s the rendering, pretty close to Preline’s readymade code.
FYI, the screenshot below illustrates what would have been the raw result with no screenshot reference and no mention of Tailwind in the brief (just feeding the text content and asking for a “HTML landing page in 2 columns with the text on the left and an image on the right”). Very 1990s 😉
If you have some reference material and learn how to play with Tailwind CSS, you can produce very well designed pages in a few minutes.
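To give you an idea, here’s a minimal sketch of such a two-column hero section, using the same Tailwind CDN setup as above (my own simplified markup with placeholder copy and image, not Preline’s exact code):
<section class="mx-auto grid max-w-6xl items-center gap-8 px-4 py-16 md:grid-cols-2">
  <!-- Left column: title, copy and call to action -->
  <div>
    <h1 class="text-6xl font-bold leading-tight text-gray-900 mb-4">
      Start your journey with <span class="text-blue-600">Preline</span>
    </h1>
    <p class="text-lg text-gray-600 mb-6">Placeholder copy for the hero section.</p>
    <a href="#" class="inline-block rounded-lg bg-blue-600 px-6 py-3 text-white">Get started</a>
  </div>
  <!-- Right column: image placeholder -->
  <img src="https://placehold.co/600x400" alt="Hero illustration" class="rounded-lg">
</section>
Drop this inside the body of the HTML skeleton shown earlier and the Tailwind CDN script will style it on the fly.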
Click & Touch Events
By default, when it comes to frontend code, LLMs will suggest desktop-first solutions, for clicks, not taps.
This can be an issue for some interactions which require a touch-friendly approach, such as drag & drop of some elements on the canvas.
Let’s first prompt ChatGPT without any mobile-related request.
The code we get is all about mouse movements:
onMouseDown when you grab the rectangle, onMouseMove when you drag it, onMouseUp when you drop it, and onMouseOut if your mouse leaves the boundaries of the canvas (both Up and Out stop the dragging process).
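In a nutshell, the event wiring looks like this (an extract; the handler names are the same as in the full version below):
// Desktop-first: mouse events only
canvas.addEventListener('mousedown', onMouseDown);
canvas.addEventListener('mousemove', onMouseMove);
canvas.addEventListener('mouseup', onMouseUp);
canvas.addEventListener('mouseout', onMouseOut);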
You can test the code in action on Codepen, right here.
It “kind of works” from time to time on mobile but it’s definitely not optimized for touch events.
See how it’s stuck (on mobile) when you try to drag the blue rectangle to the right for instance.
So let’s ask ChatGPT to make this code “mobile-friendly for both click and touch events”.
Here’s the suggestion.
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Canvas Drag & Drop</title>
  <style>
    canvas {
      border: 1px solid black;
    }
  </style>
</head>
<body>
  <canvas id="myCanvas" width="800" height="600"></canvas>
  <script>
    const canvas = document.getElementById('myCanvas');
    const ctx = canvas.getContext('2d');

    // The draggable rectangle and its drag state
    let rect = {
      x: 50,
      y: 50,
      width: 100,
      height: 100,
      isDragging: false,
    };

    // Distance between the pointer and the rectangle's top-left corner
    let offsetX, offsetY;

    // Mouse events (desktop)
    canvas.addEventListener('mousedown', onMouseDown);
    canvas.addEventListener('mousemove', onMouseMove);
    canvas.addEventListener('mouseup', onMouseUp);
    canvas.addEventListener('mouseout', onMouseOut);

    // Touch events (mobile); passive: false allows preventDefault()
    canvas.addEventListener('touchstart', onTouchStart, { passive: false });
    canvas.addEventListener('touchmove', onTouchMove, { passive: false });
    canvas.addEventListener('touchend', onTouchEnd);
    canvas.addEventListener('touchcancel', onTouchCancel);

    function drawRect() {
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      ctx.fillStyle = 'green';
      ctx.fillRect(rect.x, rect.y, rect.width, rect.height);
    }

    function onMouseDown(e) {
      const mousePos = getMousePos(e);
      if (isMouseInRect(mousePos)) {
        rect.isDragging = true;
        offsetX = mousePos.x - rect.x;
        offsetY = mousePos.y - rect.y;
      }
    }

    function onMouseMove(e) {
      if (rect.isDragging) {
        const mousePos = getMousePos(e);
        rect.x = mousePos.x - offsetX;
        rect.y = mousePos.y - offsetY;
        drawRect();
      }
    }

    function onMouseUp() {
      rect.isDragging = false;
    }

    function onMouseOut() {
      rect.isDragging = false;
    }

    function onTouchStart(e) {
      e.preventDefault(); // keep the page from scrolling while dragging
      const touchPos = getTouchPos(e);
      if (isMouseInRect(touchPos)) {
        rect.isDragging = true;
        offsetX = touchPos.x - rect.x;
        offsetY = touchPos.y - rect.y;
      }
    }

    function onTouchMove(e) {
      e.preventDefault();
      if (rect.isDragging) {
        const touchPos = getTouchPos(e);
        rect.x = touchPos.x - offsetX;
        rect.y = touchPos.y - offsetY;
        drawRect();
      }
    }

    function onTouchEnd() {
      rect.isDragging = false;
    }

    function onTouchCancel() {
      rect.isDragging = false;
    }

    // Convert mouse coordinates to canvas-relative coordinates
    function getMousePos(e) {
      const bounds = canvas.getBoundingClientRect();
      return {
        x: e.clientX - bounds.left,
        y: e.clientY - bounds.top,
      };
    }

    // Same conversion for the first finger of a touch event
    function getTouchPos(e) {
      const bounds = canvas.getBoundingClientRect();
      const touch = e.touches[0];
      return {
        x: touch.clientX - bounds.left,
        y: touch.clientY - bounds.top,
      };
    }

    function isMouseInRect(mousePos) {
      return (
        mousePos.x > rect.x &&
        mousePos.x < rect.x + rect.width &&
        mousePos.y > rect.y &&
        mousePos.y < rect.y + rect.height
      );
    }

    drawRect();
  </script>
</body>
</html>
ChatGPT has added event listeners for touchstart, touchmove, touchend, and touchcancel, which handle the touch interactions. Note the { passive: false } option on touchstart and touchmove: it allows the handlers to call preventDefault(), which keeps the page from scrolling while you drag the rectangle.
You can test this touch-friendly version on mobile via another Codepen, right here (green rectangle in the mobile-friendly version vs blue rectangle in the desktop-first iteration, if you want to compare).
If your code needs to be touch-friendly, it’s worth including this requirement in your initial brief.
That’s a wrap!
I hope you liked the format of this new edition of the AI Coding Club newsletter.
Don’t hesitate to send me an email if you have specific questions.
My Latest AI-coded App
I invite you to watch a video where I present a brand new feature I shipped for the AI Jingle Maker after 4 days and 2 nights of intense AI-assisted coding: the Audio Canvas.
One-on-One coaching available
If you’d like to get a private introduction to the art of AI-assisted coding and more broadly a detailed overview of today’s Gen AI capabilities, I’m offering one-on-one 2-hour mentoring sessions “How To Talk To An AI Agent”.
Sessions are tailored to your specific business needs.
I can also assist you in the development of your own micro SaaS project.