2026 feels really different: it is shaping up to be the year of AI. If the past few years were all about agents and agentic workflows, this year is all about skills for agents and automations.

People’s perspectives on AI

I have found that people are a bit polarized about AI. Some love it and fully embrace it, while others don’t think AI can do what they do. There is also a third group: they know something about AI and AI agents but still only use AI chatbots.

What I did with AI in the past

For chat-related tasks, I started using ChatGPT about two years ago. However, I mainly just asked it questions, using it somewhat like Google.

Then last year, I played with Claude Code a little. The command-line piece is pretty cool, and the code it can write is impressive! It was all in a CLI-like interface, and I mainly used it to help explain code written by others when I was first handed a new codebase. It helped me a lot in quickly understanding what was going on.

I was also using Cursor. Cursor helped me out a lot with SQL, actually. I was on a couple of projects where I needed to help with data cleanup and data matching processes, and Cursor helped with the syntax.

I don’t know why I used so many tools, now that I think about it.

Last year, I also followed a couple of Cole’s videos on Archon. With the hands-on examples, I learned how to call LLMs and the mechanism behind an AI-driven chatbot and how an agent works. Here is the repo for Archon: https://github.com/coleam00/Archon
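
To make that mechanism concrete, here is a minimal sketch of the agent-loop idea (not Archon’s actual code): the model either asks for a tool or returns an answer, and the loop dispatches tools until it is done. The LLM and the weather tool are stubbed stand-ins so the example runs offline; all names here are illustrative.

```python
# Minimal agent-loop sketch. `call_llm` is a hypothetical stand-in for a
# real model API call; it returns either a tool request or a final answer.

def call_llm(messages):
    """Stand-in for an LLM call: decide to use a tool or to answer."""
    last = messages[-1]["content"]
    if "weather" in last and not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "Boston"}}
    return {"answer": "It is sunny in Boston."}

def get_weather(city):
    # Hypothetical tool; a real one would call a weather API.
    return f"sunny in {city}"

TOOLS = {"get_weather": get_weather}

def run_agent(user_input, max_steps=5):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_steps):
        decision = call_llm(messages)
        if "answer" in decision:            # model says it is done
            return decision["answer"]
        tool = TOOLS[decision["tool"]]      # model asked for a tool
        result = tool(**decision["args"])
        messages.append({"role": "tool", "content": result})
    return "Gave up after too many steps."

print(run_agent("What's the weather in Boston?"))
```

The loop is the whole trick: everything an “agent” does is the model repeatedly choosing between “call a tool” and “answer”.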

Once I got access to GitHub Copilot at the enterprise level, things got a lot better. At the time, though, I think it could only read one file at a time, or maybe I wasn’t using it right. Still, it was a big deal for me. Sometimes I make typos, or while writing I forget a function I wrote before, and it would auto-complete for me! Not only that, I had a piece of SQL with about 48 joins, and the LLM was able to help me write out the SELECT statement with the correct fields! This saved me a lot of time.

AI in 2026

For a variety of reasons, I didn’t officially get started on things until March of 2026.

There are different models that can do different things, and there are so many different LLMs. Some models, instead of reading files one at a time, can read a whole folder of files and modify code accordingly. (I am pretty sure I didn’t explore that option before; silly me for doing things the hard way.)

I also finally got some time to take a look at the open source OpenClaw.

I spent a good amount of time with Google Gemini and Google AI Studio. I didn’t sleep much, I have to say. LOL. Google AI Studio can write a full set of code directly, as long as you describe things clearly enough. I had it make a DnD quest game. The game is great, although it took a couple of iterations to get right, and it eats up tokens. The capability of the tool still surprises me! I finished the game within 5 hours using Google AI Studio!!

One thing I have to say is that API usage is really expensive. OpenRouter helped a little bit, but sometimes, even if you have $5 in the account, a request won’t go through. The nature of OpenRouter, though, is that you get to pick from a variety of models, and the overall cost stays low. You can also write a script to decide when to use which model; this is mainly used in OpenClaw.
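
That “script to decide when to use which model” can be as simple as a routing function over task type and prompt size. A minimal sketch, with made-up thresholds; the model IDs are just examples of OpenRouter-style names, not a real pricing table:

```python
# Illustrative model router: use a cheap model for simple tasks and a
# stronger (more expensive) one only when the task calls for it.
# Thresholds and model names are assumptions, not real pricing data.

def pick_model(task_type, prompt_len):
    if task_type == "chat" and prompt_len < 2000:
        return "deepseek/deepseek-chat"     # cheap, fine for quick chats
    if task_type in ("code", "debug"):
        return "anthropic/claude-sonnet"    # strong at code, costs more
    return "google/gemini-2.5-flash"        # good all-rounder default

print(pick_model("chat", 120))
print(pick_model("code", 500))
```

Even a tiny rule table like this keeps most traffic on the cheap models and saves the expensive ones for the work that needs them.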

After a lot of different tries, my current setup is like the following:

  • Initial program setup: Claude Code. I really like the way Claude Code designs the front end; I think it is a bit better than Google AI Studio’s output. I use Sonnet for that.
  • Refine and debug: Refinement is done by Opus or Sonnet, depending on what I am trying to do; same for debugging. I use Cursor sometimes for that. Honestly, at this point, which IDE or tools you use is irrelevant; the most important thing is using the right models.
  • Tests: For tests, Claude Code and Cursor are equal winners, I think. Some edge cases still need a human to fill in, since they are based on special business use cases, but Claude Code can cover most of them. On some occasions, it thought of something I missed. I think Cursor might be a bit better here, since it can even show you AI agent demonstrations!!
  • Using different models in the code: Honestly, I am still figuring this part out, specifically how to pick the right LLM at the lowest cost. Take the personal accounting app I am building as an example; you can find it here. In this app, I need a model that can read files and images, process text, extract data mappings at the end, and also run data analysis on the input. Opus is excellent, actually; it can do everything I need, but it gets expensive really quickly. DeepSeek couldn’t really handle this amount of work. The one I am currently using is gemini-2.5-flash, and it can actually do everything I need. I have yet to try Minimax; that will be the next one I try.
  • Using different models in OpenClaw:
    • I started with the free tier of OpenRouter; it was really good and super cheap!
    • Sonnet is the best, just expensive compared to the others. Gemini’s free tier is a bit tricky to figure out.
    • Ollama’s context window filled up quickly. For quick chats it was okay, but for complicated tasks like searching online or writing code, it was not enough. I was also only running Ollama locally on the free tier; maybe that’s why.
    • DeepSeek is a bit slow, and the results it returns are sometimes incorrect. It is low cost, though.
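
Those tradeoffs suggest a simple fallback pattern: try the preferred model first and fall back to a cheaper one when a call fails (quota, rate limit, and so on). This is an illustrative sketch with stubbed-out “providers”, not OpenClaw’s actual mechanism:

```python
# Fallback-chain sketch: try models in preference order, fall back on
# failure. The "providers" here are stubs standing in for real API clients.

def sonnet(prompt):
    raise RuntimeError("quota exceeded")    # simulate the pricey model failing

def deepseek(prompt):
    return f"deepseek answer to: {prompt}"

CHAIN = [("sonnet", sonnet), ("deepseek", deepseek)]

def ask_with_fallback(prompt):
    errors = []
    for name, model in CHAIN:
        try:
            return name, model(prompt)
        except RuntimeError as exc:         # note the failure, try the next model
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all models failed: " + "; ".join(errors))

name, reply = ask_with_fallback("summarize this repo")
print(name, reply)
```

The nice part of a chain like this is that the expensive model only costs you money when it actually answers.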

What’s Next with AI

After all the time I have spent with different AI tools, I am more and more comfortable with them now. I have also noticed that the most important thing now is to get the requirements right. Traditionally, the process would be to gather requirements and write out the epics, then break the epics into smaller, bite-sized pieces: user stories. User stories follow the format of “As a… I want to… So that…”. Then assign each user story to developers to finish, go through unit testing, integration testing, system testing, UAT, and regression testing; finally, if everything passes, move it all to production and handle user onboarding, training, and so on.

With what AI can do now, I feel like the process needs to change a little. We still need to gather requirements, and I think epics and user stories are still important; this process helps you think through the problem and makes sure you clearly understand the problem you are trying to solve. Then you need to write everything down before handing it to AI agents; the docs and specs need to be extremely detailed. The overall process needs to be a bit different as well. For the document part, take my personal accounting app as an example. The following is basically a technical requirement document, functional requirement document, UI requirement document, product requirement document, business requirement document, and test script document all in one.

This doc includes the following information: (also, to be honest, some details were filled out by AI; mostly I had a couple of bullet points on what I wanted for each section, and AI helped to enrich them)

  • Overview & vision
  • Problem Statement
  • Target audience
  • Requirements & Functions
  • User Stories & Acceptance Criteria
  • Tech stack
  • Success criteria
  • Summary of this app (generated by LLM)

For the details, please refer to this post here: Requirement Document for Personal Accounting App

Summary for this section

With various AI tools, the lines between the different roles on a traditional project are getting blurred. I am sure you have heard the jokes going around where the backend developer, the frontend developer, and the PM each think they no longer need the other two. LOL. All of this shows that individual contributors will be held to a higher standard in general. They are no longer just in charge of writing code or the single role they represent on the team; they also need a good understanding of which tech stack to use and what the UI might need to look like, and they need to act as the architect and product manager who has a vision of what the product should be and who its audience is. Each individual contributor needs to be more well-rounded. I am not saying PMs, developers, and so on are no longer important on their own, but the way the work happens might be a little different. Each developer, UI/UX designer, and PM has years of experience; combined with that experience and AI’s efficiency, the team can potentially work more efficiently and deliver more. As for juniors, they will need to learn how to write proper requirement documents and understand why those documents are structured the way they are. That way, juniors are truly standing on the shoulders of giants, as Isaac Newton once said, to achieve even more.

Future

What do I think about future jobs?

Each worker needs to be more well-rounded in general, because knowledge has become cheaper. Gemini chat has a learning mode; when I want to learn something, I ask it to write me a learning path for that topic, tell it my preferred learning methods, and it maps out the best path forward. At that point, I realized something: this could actually revolutionize education. Anyone could ask AI to produce a good curriculum to catch up on whatever they would like to learn. Now, I don’t think we should get rid of schools; schools are really important, since knowledge builds on itself. Without school, you also wouldn’t know the best learning method for you, or what to ask AI in the first place. People go to school to learn the best way to learn new things; I think the whole school system is set up for that. Some people may figure that out on their own, but the majority still need a structured way to learn the foundations.

Kids, DO NOT DROP OUT OF SCHOOL! Not only is the whole school experience extremely important, but most importantly, do what you are supposed to do at your age: try different things, fail them, get back up again, and find the things you like to do. Life is short. Experience is more important throughout this journey.

As for jobs, I really cannot tell what will happen in the future. Things are changing too quickly; it is different day by day. There is so much to learn that sometimes it is hard to keep up. The bottom line is that service-related jobs will still be relevant. You still need a cook to cook food for you, unless in the near future a robot does it with a human typing in the recipes. In that case, you are letting robots rob you of the fun of making something on your own.

What do I think about AI?

The more I use it, the more I feel like AI is like another co-worker. When I test a function and it doesn’t work, I work with AI to debug and try to figure out the problem together by talking to it. I still get moments like this:

Or if you caught me talking to my computer screen loudly, you can guess why.

I have to say, though, that the tone, the personality, all of it feels more and more human; you can even define how you would like your AI to sound and what kind of personality it should have. That is kind of wild, isn’t it?

We, as human beings, also need to think through what we are allowing AI to do for us; human beings are still the ones responsible in the end. If we rely too much on AI, then we are also taking the fun out of solving a problem. The brain is also like a machine; it needs exercise from time to time.

With all of that being said, as real human beings, we need to go out, have contact with other human beings, and feel the world around us even more nowadays. Otherwise, as Nick Bostrom asked, “Are You Living in a Computer Simulation?”, and in our “simulation”, how different is it from the AI world? As Musk once asked on a podcast, “What’s outside of the simulation?”.


UPDATES & Clarifications

  • 3/23/26: A couple of things to clarify here: when I mentioned I used Ollama, I forgot to specify which model; I was using llama3.
  • 3/23/26: I just changed the model for my OpenClaw to use Kimi, and it has been great!!!

In the end, this article is written by a human, Xuan Jin AKA Shirley, with human charisma lol.