Note: This article is a work in progress; there may be a few gremlins in the content, and it may be incomplete or missing sections. It may also be updated over time.

AI upskilling moral predicament

I've always been a little skeptical about using AI in my development process; I wasn't an early adopter. Seeing examples of poor, non-semantic and inaccessible code scared me off a little. However, recently, while looking for a new role, I've been using it more as a way of upskilling and filling knowledge gaps. It's left me with a few random thoughts, which I'm writing up here so I can look back at the end of the year and see how my approach and views might have changed.

How am I using AI currently?

Day-to-day I currently use AI in the following ways, that I'm aware of; there is probably a lot of secondary consumption of it I'm unaware of (translations etc.):

  • Google search summaries
  • GitHub Copilot code autocompletion/suggestions/chat
  • ChatGPT 'pairing', feedback and suggestions (using AI chat via responses to prompts)

Note

These are only my views on AI usage. I absolutely see the value in it and everybody is free to make their own choices and decisions on suitable usage.

Freelance benefits

I've noticed that now I'm out of work, my usage has increased, especially with regard to ChatGPT. When working within a product/discipline team in previous roles, my first reaction was always to reach out to team members for help, whether that was a pairing session, a huddle or just a Slack chat.

With none of this available, and doing some freelance bits and bobs, I've found ChatGPT to be a useful 'virtual' pairing partner, and it's made me realise how it can be a useful starting point even when you do have colleagues, for example if they're too busy or in the middle of something that requires focus.

Productivity

I've very much appreciated the free tier Copilot has introduced (recently?). Previously it was paid-for, and I only had access when added to a GitHub organisation for agency roles. Now, with my own account, I'm realising how powerful it can be. I use it a lot as a 'mega-autocomplete': writing example functions based on what I start typing and the context of the file I'm in, and adding repeating/similar blocks with just the tab key. It keeps up the flow a lot more when writing code.

I like the chat feature as well (using ChatGPT under the hood, I _think_), as you can provide the context you want it to answer questions from: a file, folder, solution etc. It can then apply the fix it suggests, which you can easily undo or swap for a different one. It's very similar to just using ChatGPT, but with fewer prompts needed to establish the structure of the code and project you're working in, and it makes fixes directly in your file.

Upskilling

I'm currently upskilling in TypeScript and React, due to industry demand in the roles I'm looking at. I'm going through some courses for these, but while doing some freelance project work, using ChatGPT to help with the bits my skillset doesn't extend to has been very helpful. I especially like how you can coax out more information, better solutions, and explanations of how and why the code does what it does.

Having those explanations, and seeing examples from the courses applied to the code I'm working on, has been great for assisting my learning and my recognition of the patterns.

But I have niggling concerns

Whilst I'm finding the increased usage is improving my workflow, productivity and understanding, there are a few downsides that I just can't shake, and they sit at the back of my mind whenever I use it.

Environmental impact

The negative impact AI has on the environment is well known and documented. As somebody who cares about the environment, this plays on my mind when prompting AI, especially when using it for pairing and needing it to go into more detail or to optimise and clarify solutions. And beyond the responses themselves, a huge amount of power is needed for the training stage of the models.

Poor output

Something I've felt since first using AI is that it will always need human experience and review to ensure the best output. A former colleague once described AI as a "confident bullshitter".


I've seen plenty of div soup output when using it as a starting point for components, especially with Copilot. The issue is that the models don't always learn from the best examples. You can usually get better output with a couple of follow-up prompts, or a better-written prompt to start with, but it would be nice if it got there from the outset.
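As a contrived illustration of what I mean by div soup (not actual Copilot output), compare a clickable div with the semantic element it should have been:

```html
<!-- Div soup: looks like a button, but isn't announced as one,
     isn't keyboard focusable, and needs JS just to be clickable -->
<div class="btn" onclick="submitForm()">Submit</div>

<!-- Semantic: focusable, announced as a button by assistive tech,
     and submits the form with no extra JavaScript -->
<button type="submit">Submit</button>
```

The second version gets keyboard and screen reader support for free, which is exactly the kind of accessibility detail that tends to be missing from first-pass AI output.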

Individuality

Whilst looking for new roles, I've seen a lot of advice around using AI to rewrite posts/articles/CV bits, and a lot of the tools and software we use now have AI assistants and chats built in (I've even seen them on a greetings card site - nothing says personal like an AI-written message, hey).

My issue with relying on AI to rewrite things is that, yes, it might sound a bit more professional, and it might structure some sentences and paragraphs a little better, but it stops sounding personal, like me.

It becomes easy to spot when a body of text has come from AI, with tells such as 'z' instead of 's' in words when writing for a UK English audience, and heavy use of the em dash (—).

The application process for civil service positions now includes a guidance page on what counts as acceptable use of AI in GOV.UK job applications.

Large language models need a lot of data to train on so they can give suitable, in-depth responses, and some of the information they consume is copyrighted or pirated.

Joy

Just because you can

Further reading