I first wrote this article in 2026 on social media to get feedback, then wrote and updated this final version here. It applies only to my early observations in 2026. Thank you to all the posters who replied to the social media posts; the feedback was extremely useful for reflection. Note that "community" refers to the social communities where I originally posted this article and received feedback. Because research can change behavior once recognized, the pattern noted in this article may not hold going forward.

I'll start this article by demolishing the myth of AI Engineering demand in 2026.

There is no high or widespread AI Engineering demand. Anyone posting that there is is selling a product - usually educational, but sometimes a SaaS tool that can be built with one or two prompts. The volume of information about AI Engineering demand is mostly product marketing, in most cases educational. As someone who has hired and who works with recruiters and firms on hiring, I can see upward of 300-500 resumes arrive within a few days right now.

Overall, the rest of the tech market is almost as bad (most positions get about 200-300 resumes within a day). I won't bore anyone with the "why" because there are countless theories you can read, but tech is not hot, and I'm hoping we stay in a secular tech bear market for a while to flush out all the hype.

We may all someday look back at tech the way we look back at $130-a-barrel WTI oil in 2008 - that felt good to the oil industry, but look at its stagnation ever since. He-who-cannot-be-named may be viewed the same way for tech.

That's bad news for those of you hoping for a future tech career.

I know many exceptional people in this industry who cannot get a job. That's any job, not just a lateral or upgrade position.

This should give every reader pause, especially the readers who want a future tech career.

Industries That Pull Equal Opportunity

When I started Automating ETL about 12 years ago, the industry faced a talent shortage. ETL positions effectively had a negative unemployment rate: for every one ETL developer getting laid off, there were 20-30 open jobs. It was not uncommon to walk into an interview and be offered a job on the spot. In fact, that was one reason I created the course. I received 11 job offers in 2 days. Notice I wrote offers; many other companies were interested in interviewing and hiring. It felt overwhelming.

That describes an industry in high demand. Companies didn't care about degrees, certifications, or projects. They cared about one thing: "Do you have an interest in working with data and cleaning it, and can you show us that you can do a little of it?" Even if you couldn't, you could sometimes start at a junior level and they would train you.

In hearing from many early students, many (1) were paid by their company to learn the material, (2) received a learning stipend that they chose to use for my course, or (3) saw the demand and needed some basic skills to get a job or to create a project of their own. I have no doubt that exceptions exist, but at the time I started the course, learning ETL made a lot of sense.

Recall that Udemy was new at the time, yet some companies were willing to take a risk with people who were teaching on the platform.

The industry has changed.

Companies may pay their existing talent to learn and expand their technical skills, but they're not paying for outsiders to learn technical skills and then bringing them on. Based on what I'm seeing and hearing from many recruiters, for every open position, recruiters may see up to 200 resumes. That figure is even higher depending on the job and benefits - remote jobs may receive 500-700 resumes in a few days!

For instance, one recruiter I spoke with at the time of originally writing this article said, "I've spent this entire week in interviews. Literally, back-to-back-to-back calls. It's never been like this, and it's only a small fraction of the applications and resumes we've received." I share the recruiter's feelings; I constantly get asked if I can help interview people, even with a large volume of work already. This is one reason I wrote a helpful thread on filtering the volume of resumes - I don't like interviewing people and I have too much on my plate, so that thread may have helped some of you.

This is all the polar opposite of when I started the course.

This is one reason I don't market my course, nor have I released the latest version. I don't expect I will for at least several years, and I also dissuade anyone from buying the Udemy version. You can see this on the landing page, which reads as follows:

Note that this course is no longer actively updated as of 2024 as far as the specific curriculum content. If you are looking for the latest in ETL/ELT development, you can reach out directly. In addition, as of late 2025 the data industry (including ETL) is facing a significant reduction in demand. This course pricing has been adjusted to dissuade new students for late 2025 until the industry improves.

Welcome to someone who actually has integrity and isn't trying to sell you on a course when the timing is not right and may not be right for a while.

As I frequently share with my children, you only enter an industry that is pulling you into it. If you're one of the few focused people remaining in the world, you will greatly benefit from this advice: enter industries that pull you in, because it saves your most valuable currency, time. By contrast, most people want to enter industries that are hyped, which is the wrong industry (and the wrong time) to enter. In addition, exceptional people do not believe that they are the exception to the rule or that industries will quickly come back. That's another golden gem here: the sign of an average person is someone who believes they are better than most and that "this trend is just temporary."

In general, you don't have to end up where you start, but you do want to start strong. Don't blame the lake when you fish one with few fish; blame your choice of lake, and next time choose one packed with fish.

An Industry That Is Pulling

Down the road from where I live, a plant pays people high wages to learn welding and weld materials.

They don't care about your pointless degree, certifications, or skills. If you don't know how to weld, they'll teach you and put you immediately on projects. They pay significantly above minimum wage, even for young people who are old enough and willing to learn to weld.

As I tell my sons, you can learn how to weld, weld for a while, then later do other things. In addition, because welding integrates knowledge from industrial fields, it makes a good starting point to pivot into other industries. You're also practicing chemistry and physics - you're literally working with heat and metal (or metal alloys).

In addition, many blue collar skills have changed only slightly over time. An early investment in them continues to pay very well in good seasons, and they aren't skills you have to re-learn.

(Consider that Jesus Christ was a carpenter around 2,000 years ago and carpentry is still a good paying job today. Meanwhile, an AS/400 developer will probably not exist in a decade, much less in a century.)

The demand for technology today is very different from 10 years ago. What tech workers don't tell you is how much of their own time they spend learning new technology, attending events, and so on. That's great if you like learning like I do, but if you want good work-life balance, stay out of tech. In addition to constantly learning new things, you'll be facing an industry in a bubble that people are starting to see through (plus, their standard of living is not improving, and all they hear are more deceitful promises from tech).

If you're young, new to the profession, or a parent of high school kids, this is something to think about in the bigger picture. I used to joke with parents: you have 5 kids, only 1 of the 5 gets to go to college. Parents always pushed back, but that very pushback showed they were going to devalue what a college education meant as more people went to college. They ended up creating massive demand, which pushed prices significantly higher. Ironically, these same parents and students complained about education's costs while contributing to the problem!

I do find it peculiar that people who cannot live a single day without food, water, or electricity don't want their kids to become farmers, plumbers, or electricians. I have gone months without a cell phone. I could not do this with water and stay alive. I could only go so long without food before I faced problems. Electricity would be possible to live without, but very difficult.

There isn't a single product I've made in my years of tech that is required to live or makes life much, much easier. I've known a few people who've quit using smartphones and their happiness rose significantly. These reflections give us pause.

What are tech companies even doing?

Finally, the obsession over digitizing everything may come back to haunt everyone when people painfully learn that the digital world can never be secure. The analog world requires physical presence, which protects us in key industries like water, electricity, etc. We don't need to be a military general to recognize the problem with digitizing everything, especially in key industries.

More Important: Countries That Pull

But even more important than industries that pull? Countries that pull. Stated another way: you may successfully predict what economic opportunities will exist in the future, whether as a career or as business ownership. But that doesn't matter. If you're in a country that doesn't want you or is incentivized not to do business with you (hire you, etc.), then your skill won't matter.

For a simple example of this, consider colleges with large endowment funds, possibly federal and state funding as well, that also charge tuition. Do these exist in the United States? You should do your own research and speak directly to institutions that come up in your searches (for instance, at the time of writing this section, several of the popular LLM tools alleged some universities where this applied, but I would want further research to verify those LLM allegations). Regardless, many American parents have shared with me that they feel this way about some universities, given how much funding they allegedly receive and how little case those universities make for hiring their own graduates.

Case in point: the hot debate about Frisco, Texas. In the video, YouTube creator Tyler Oliveira interviews a man who claims he has two master's degrees, yet is unable to find a job in IT (near 0:49). The man claims the reason he's unemployed is that his labor is too expensive.

This allegation confuses American parents. If this man is telling the truth that he has two master's degrees from educational institutions, why aren't these educational institutions using their power to prevent companies from excluding him?

Or let's ask the same question a different way. Why would I pay $80,000 each for my sons to get college degrees from American universities only for American companies to not hire them and hire cheap labor? Keep in mind, that $80,000 is a low cost degree! Some of the better universities can easily charge over $100,000 for a 4-year degree.

What I'm writing above is precisely what I'm hearing more and more American parents say. Consider that if the LLM tools are correct about how much money American universities make, those universities could flex their financial power and legally crush companies for excluding their talent. I'd be more than willing to share a link to universities that do that here, but right now all I find is a Bloomberg claim alleging that Americans with four-year college degrees now account for a record 25.3% of U.S. unemployment.

A couple of further notes from Tyler Oliveira's video above that I found especially fascinating. Many of the Indians he interviewed (along with many Indians I knew from the time I lived in Frisco, Texas myself) want to go back to India. They are simply extracting resources from Texas for now, but don't want to stay here. Most of them also have degrees from Indian universities. This seems peculiar, because the video seems to indicate that American employers prefer to hire Indian university graduates over American university graduates, unless the video's allegations are incorrect.

In my own experience from conversations, most of the Indians I met in Frisco graduated from Indian colleges, and none of them were unemployed. My former colleague Abhinav said it best about the United States versus India: he liked the idea of getting resources from the United States, but wouldn't ever want to stay here too long, because he finds it odd how the United States treats its own citizens. Remember, no one wants to join a club that disrespects its current members. You want to join a club that treats its members extremely well!

What do I personally take from this? Asia is where the opportunity is and will be. These Indians will go back and they will be pouring tons of money into India and other Asian economies. The United States will be a hollowed and emptied land with nothing for its residents. I don't want my kids growing up in that. I want my kids in a land of opportunity. Where there is opportunity, there will also be better education. From what I see in that video and from my own experience, that doesn't apply to the United States anymore.

In other words, skill doesn't matter as much as where you are matters. The next century (and maybe five or more centuries) belong to Asia.

The HALO Future?

While Josh Phair wasn't writing from a career point of view, I would advise my kids to consider the advice in his recent post: Hard Assets, Low Obsolescence.

Some people believe that robots are going to do all jobs in the future. Yet if I asked these same people to name even 3 elements from the periodic table that make up any robot, they couldn't answer. If I further asked them about the supply and demand of those elements, I'd really get silence. If I asked them how the demand for these elements would shift both the supply and the cost, they would be absolutely silent.

Yet robots are going to do all our work in the future?

I first started making this point in 2023, when some of the AI innovation that we now see started to rise in popularity. No one who claimed robots would do everything could name even 2 elements on the periodic table. At least as of this writing (2026), I'm finally starting to hear this recognized.

(Several years ago, I made a similar point about people claiming we'd be mining asteroids for gold and other resources, even though these same people couldn't come close to estimating what gold would need to cost per ounce to justify it. Ironically, that video foreshadowed what we would eventually see with coins and why.)

These assertions are straight out of the "We'll have fusion in five years" claims I heard when I was six years old.

Guess what? We still don't have fusion decades later.

"But we'll have fusion in five years!"

No we won't.

The people who run around saying these things reveal that they haven't lived in the physical world. They have no idea what they're saying. Anyone who's done physical work, like welding, will tell you that there are only so many elements on the periodic table that can withstand that heat. A robot as functional as a human at this work will be limited to certain situations. It will hardly cover all the situations required, especially in the context of repairs.

Now, ask plumbers, electricians, farmers, and other blue collar workers the same questions about what they do.

But young people who consider these thoughts have a big advantage.

Robots are going to have to be much cheaper than what they replace. And most won't be cheaper. So what isn't cheap to replace, and what is hard to replace? Again, this is where critical thought works - and those of you who've already lost your minds from using AI won't know.

However, Josh's post makes a good point to think about when you consider what you want to do and will be doing in the future. As a note, Josh is posting from an investment perspective, but it will be similar for careers.

Popping the AI Hype

Even with the new AI tools, I can easily predict that 20 years from now in the West:

  • Your income will have stagnated relative to the price of homes

  • Your income will have stagnated relative to the cost and service of healthcare

  • Your income will have stagnated relative to the required goods and services to exist

  • But even more important than the above: you won't be living longer than your grandparents (which is already true for Western Millennials)

Yet in 20 years from now in the West, you'll still be hearing about how tech is solving problems when it's doing absolutely nothing for the actual things that we all need.

Keep in mind that I was one of the only demographers over a decade ago who predicted that the Western Millennial generation would not outlive its parents. Experts at the time were predicting that Western Millennials would live to 120 or beyond. Not only were they all wrong; my predictions ended up underpredicting how badly Western Millennials would be doing in terms of health.

What I just described in the above paragraph is not a higher standard of living. Yet Western Millennials are "technology natives" and technology has done nothing for them in the bigger picture of life.

Let me repeat: if you're a tech worker, this should give you pause.

Meanwhile I'm one of the few people who continues to caution that AI is more than just creating AI models or using GPUs. It actually involves heavy resource use. For instance, Robert Friedland highlights a recent example of this by breaking down some of the minerals used in a data center. For the record, that video says nothing of the water demand, which these data centers will use in large volumes (some populations are pushing back on this for this reason).

(This last paragraph highlights why I shared with friends and family Kitco's interview with Dr. Kaplan. My main point to them was not about guessing the price of things, as I don't care. It was that his interview fundamentally highlights that we've underinvested in the physical world and that what we'll see in the physical world is actually a correction. So when he says "We'll look back on $3,000 gold as a gift" consider this viewpoint in the context of a society that underinvests in something it desperately needs in the future and only an upward correction helps alleviate this over time.)

The New Internet Contributions

What I've been witnessing with internet contributions - and it's shocking:

  • Some of the posters need an LLM to help them write 2 sentences.

  • Some of the posters can't understand basic writing and need an LLM to help them understand something.

  • Some of the posters turn to an LLM at the first problem they experience, rather than taking a moment to consider whether they should even solve the problem in the first place and, from there, thinking through how they would solve it.

  • Some of the posters hype AI stuff, when, as we've said from the beginning, AI is more of an energy and data story than an AI tool story. This guy picked up on that.

  • Some of the posters don't have any clue why doing things on their own is important and valuable.

If you want to work in tech, but you need an LLM to write a sentence or help you understand a basic post, you're headed for a world of trouble. LLMs have made critical thought even more valuable, and LLMs lack critical thought - they inherently rely on others' input and take the shortest route, neither of which is critical thought (or creativity).

Some posters are using LLMs to create AI garbage, a new form of spam that is completely disconnected from human experience. Consider that any place that lets users post AI garbage like it's going out of style will end up losing everyone anyway. We're seeing this across many social media platforms already; once people realize the content is just LLM output, they leave.

I've seen more friends delete their LinkedIn, Reddit, Facebook, X, Instagram and other social media accounts in the past year than in the previous decade. Why? In many of their words, most of the content is fake. They aren't interested in fake content. As it turns out, most people want to see and hear actual people, not AI garbage.

There's also a bigger pattern about people that those of you who keep using LLMs are missing.

Most people want to hear people's thoughts. Who knew?

AI doesn't have thoughts. AI is garbage that comes from other people's input. AI isn't able to create anything meaningful on its own because it's not a flesh-and-blood living being. It's just a regurgitation of input. The fact that some of you still don't get this says everything.

Anyone who knows anything about creativity will tell you that creativity comes from a lack of input, not more input or even combined input. This is why AI will never be creative. It simply finds the shortest route to anything - that's all it can do.

Is that creative? No.

Star Wars was the shortest route to what? It wasn't the shortest route to anything.

In fact, most movie experts thought Star Wars would be a failure and that George Lucas didn't know how to write a story at the time. People forget that Star Wars' success shocked the experts. If modern AI had existed then, it would have predicted that Star Wars would be a failure, because the heavily weighted input would have said so too.

(While much deeper than a sentence when you think about it, AI creates unintended effects too. For instance, an AI tool finding the shortest route with GPS has the unintended effect of clogging other roads for drivers. Someday people will understand what this means in a broader context, but it foreshadows a lot of future events.)

Input isn't creativity. As I've cautioned, imagination is a precursor to creativity, and the more you use AI (similar to how people used search engines in the past), the more you limit your ability to imagine. It's peculiar that some of you don't realize this.

The good news for the few of you still reading this is that you know that people want to connect with other people, not AI. Nikita recently shared his understanding of this and more people over time will realize this - though their platforms may be gone by then. You realize that AI can be useful in some situations, but it will be costly over the long run when people expect to be communicating with people. If you use AI to speak with people, you'll lose over time - and big. But if you keep AI in its rightful place, you'll win - and big.

People are not interested in what your AI says. We're interested in what you think. You can have 10 typos in your writing. We don't care. Your writing is your thoughts - mistakes and all - and that's far more interesting to us than any AI post that formats and types everything perfectly.

Good Uses of Artificial Intelligence vs Bad Uses

I'm not writing this to say that all uses of AI tools are bad. In fact, I'm writing just the opposite because, like a calculator, AI is a tool that can be extremely useful. You can use some of these tools to improve what you do already.

For instance - and I am actually shocked that no one has thought of this yet - if you're a recruiting firm, negotiate with an LLM provider to get an analysis of the queries users send to LLMs. From there, you can identify who's using LLMs to learn (the whats, whys, and hows of improvement) versus who's replacing critical thought - the latter group you want to avoid hiring.

The former group of people: gold! Who's using these tools to increase their productivity and skill while enhancing their thinking? Some of these LLM tools have this data, and it is extremely valuable to companies that want exceptional talent.
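To make the workflow concrete, here is a minimal sketch. Everything in it is a hypothetical assumption - the query-log format, the phrase lists, and the labels - since no LLM provider publicly offers such an analysis today:

```python
# Hypothetical sketch: separating "learning" queries from "replacement"
# queries in an LLM query log. The phrase lists and log format are
# illustrative assumptions, not any provider's real API or data.

LEARNING_PHRASES = ("why does", "how does", "explain", "what is the difference")
REPLACEMENT_PHRASES = ("write this", "do my", "answer this for me")

def classify_query(query: str) -> str:
    """Crude keyword heuristic; a real analysis would be far richer."""
    q = query.lower()
    if any(p in q for p in LEARNING_PHRASES):
        return "learning"
    if any(p in q for p in REPLACEMENT_PHRASES):
        return "replacement"
    return "unclear"

def profile_user(queries: list[str]) -> dict[str, float]:
    """Summarize one user's query history into rough label proportions."""
    counts = {"learning": 0, "replacement": 0, "unclear": 0}
    for q in queries:
        counts[classify_query(q)] += 1
    total = max(len(queries), 1)
    return {label: n / total for label, n in counts.items()}

history = [
    "Why does a left join return nulls here?",
    "Explain how indexes speed up this query",
    "Write this cover letter for me",
]
print(profile_user(history))
```

The point is the workflow, not the heuristic: the signal a recruiter would care about is each person's ratio over time, not any single query.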

What you just read involves someone who's using AI to improve themselves (understanding the what, how, and why of things) versus someone who is thinking only of first-order problems so that they can move on to amusement (a word that literally means "to not think").

A parallel example from writing where AI can help:

You write a story, but you dislike how many occurrences of "is", "was", "were", etc. exist in it. You ask an LLM tool for help with visual words that would enhance the story.

It's still your story, but the LLM is helping you like a dictionary/thesaurus combination to enhance your story for your readers. The difference is that the LLM can do this faster than you looking up words in a thesaurus or dictionary.

But you're still thinking about how you tell the story, how you organize the events, and what happened in the story - all extremely good skills to practice (especially organizing your material). Like the above example, this writer is using an LLM as a tool to assist, not as a replacement. There's a huge difference, and if I wanted to hire writers, these are the writers I would seek out.
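As a rough illustration of the mechanical half of that assist, here is a small sketch that flags sentences leaning on weak linking verbs, so the writer knows where to ask for stronger visual words. The naive sentence splitter and the verb list are simplifying assumptions:

```python
# Sketch: flag sentences that lean on weak linking verbs so a writer
# can decide where stronger visual verbs might help. The sentence
# splitter and verb list are deliberately simple assumptions.
import re

WEAK_VERBS = {"is", "was", "were", "are", "be", "been"}

def weak_verb_report(text: str) -> list[tuple[str, int]]:
    """Return (sentence, weak-verb count) pairs for sentences with hits."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    report = []
    for sentence in sentences:
        words = re.findall(r"[a-z']+", sentence.lower())
        hits = sum(1 for w in words if w in WEAK_VERBS)
        if hits:
            report.append((sentence, hits))
    return report

story = "The castle was dark. Rain hammered the gates. The guards were asleep."
for sentence, hits in weak_verb_report(story):
    print(f"{hits} weak verb(s): {sentence}")
```

The tool only points; the rewriting - the actual craft - stays with the writer, which is the whole distinction being drawn here.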

And one sign that the LLM has helped you over time: you need it less. Think about what I just wrote. One sign that AI is doing its job for you is that you depend on it less, not more. Again, where do you hear this? Nowhere.

An example of using AI to help you understand concepts in school: a particular concept is hard for you. Ask for more practice and for the steps on how to think through the problem. In mathematics, inversions always made sense to me (subtraction, division, square roots). The opposites were harder, like multiplication. Now, you can use these tools to help you connect the dots better and think through these problems.

Again, you're not replacing critical thought. You're practicing improving where you may be weak or where you may be able to leverage an existing skill.

An example of preventing hype and non-experts from filling your social media feed (using X as the example): "Auto-mute accounts posting about [keyword/topic] over [timing context] who haven't been posting about [keyword/topic]."

In this example, AI is a much shorter route than you trying to find and mute all these accounts yourself.
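Sketched over a local export of posts (no real platform API is used - the post format, the keyword matching, and the 30-day history window are all illustrative assumptions), the mute rule might look like:

```python
# Hypothetical sketch of the auto-mute rule: flag accounts posting
# about a keyword recently that have no older posts about it. The
# (account, text, timestamp) format and window are assumptions.
from datetime import datetime, timedelta

def accounts_to_mute(posts, keyword, now, history_window_days=30):
    """Return accounts mentioning `keyword` recently but never before
    the history window - i.e., the newcomers riding the hype."""
    recent, historical = set(), set()
    cutoff = now - timedelta(days=history_window_days)
    for account, text, when in posts:
        if keyword.lower() not in text.lower():
            continue
        if when >= cutoff:
            recent.add(account)
        else:
            historical.add(account)
    return recent - historical

now = datetime(2026, 3, 1)
posts = [
    ("@longtime_ml", "Training runs and GPUs again", datetime(2025, 6, 1)),
    ("@longtime_ml", "GPUs are still supply constrained", datetime(2026, 2, 20)),
    ("@hype_account", "GPUs will change EVERYTHING, thread 1/48", datetime(2026, 2, 25)),
]
print(accounts_to_mute(posts, "GPUs", now))
```

The long-time poster survives because of their history; the account that only showed up for the hype gets muted.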

By contrast, a bad example of using AI: "Automatically post for me on X about [keyword/topic]." Why? Because if you're communicating with humans, be human.

(As a fun practice with any LLM tool to evaluate its data sources and synthetic application of its data, provide it with some clues about something and see if it can guess the something. I provide an example here. In this practice, you are seeing firsthand how an AI tool is only as good as its data.)

These are golden uses of AI tools.

Using AI tools to pretend to be you on social media so you can spend time at the beach is not a golden use. Sure, it may feel good, but in the long run, as people realize your social media is just a bot, they'll recognize your lack of presence (plus what that fundamentally says about you as a person). Will many people try this over time? Yes. Will many succeed in the short run? Of course. But life isn't a one-time game; you play it over and over, and we all get better at weighing real versus fake, even if it takes time to recognize.

Challenge: An AI Project To Complete

Some of you may have noticed that browsers are frequently pushing updates. Have you read the latest terms and what they're doing with the information that you read and how you're browsing?

If you do, then congratulations because you're major steps ahead of Medium, Substack and other content providers who are rapidly falling behind what browsers are doing with information.

But knowing isn't enough here.

It's time to apply.

Use an AI tool to build your own browser. Don't build an exact replica, but build a browser that has the features that you want. Do you just want to read? Then build that. Do you just want to watch videos? Then build that. Take time to consider what you want, then use an AI tool to make your desired browser a reality.
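As a seed for the exercise, here is a toy text-only reader built on Python's standard library. It is a sketch under heavy assumptions - no JavaScript, no images, no security hardening - and deliberately does nothing but extract visible text:

```python
# A minimal text-only "browser": fetch a page and print its visible
# text, nothing else. Toy sketch only - no JS, no CSS rendering, and
# no security hardening. Built entirely on the standard library.
from html.parser import HTMLParser
from urllib.request import urlopen

class TextExtractor(HTMLParser):
    SKIP = {"script", "style"}  # invisible content to drop

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def extract_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)

def read_page(url: str) -> str:
    """Fetch a URL and return only its visible text."""
    with urlopen(url) as response:
        return extract_text(response.read().decode("utf-8", errors="replace"))

# Demo on a literal page so nothing touches the network:
sample = "<html><head><style>p{}</style></head><body><h1>My Reader</h1><p>Plain text only.</p></body></html>"
print(extract_text(sample))
```

From here, add only the features you actually want - a history file, a reading-width cap - and notice how quickly you run into the real questions about what a browser does with your data.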

You will learn more about AI with this exercise than from any video, article, or image on the internet. You'll also have a tool you can use, if you're willing to take the risk, since any tool you make may come with poor security (if you don't know all the security nuances). This latter point is less of an issue if you're simply using the browser to read text on the internet with no images.

(Note: this is meant as a fun project. The second you start to think about selling anything you make with AI, you invite a significant amount of complexity because you now have to protect the tool against malicious actors. You may be okay with these risks for yourself, but others may not be okay with these risks, and you're inviting that complexity.)

Once you finish with this exercise, reflect over the experience. Would you do it again? What moat do you think software companies actually have now that you've tried this project? What are people not considering with all this?

Your answers will mean more after you experience a project and see the result.

The Eventual On Premise Push

Once companies realize how valuable their data is - at least the ones that actually take care of their data - they will realize how important fully owning that data is. You can say goodbye to LLM and SaaS tools; I know many of you have not read those agreements.

I'm seeing early roots of this with some key companies.

They want full ownership of their data and processes (and will hire only people who will not use LLMs and who will be as secretive as you'd expect in some security roles). For the record, hiring people who know how to keep their mouths shut has always been the rarest - and also among the highest-paying - talent.

Why?

If an LLM or AI tool can replace entire businesses or make it easier for people to compete against you, some executives are starting to wonder how protected their niche is. I'll be blunt: if you don't fully own your data, your niche will be gone in a few years. Most of you will downvote this into oblivion because you can't handle the truth that you don't have a moat if you can't protect your data. That's an uncomfortable truth that 99% of people can't handle right now.

Once these executives connect the dots that these LLMs and other AI tools learned this from the very data they've been sharing, you'll see that sharing cease fast.

In addition, developing SaaS tools can happen fast. You can replicate the needed features without all the bells and whistles that come at a much higher cost. You'll see more of this over time, especially with tools whose pricing makes no sense when they can be replaced at a fraction of the cost, plus full ownership of the data.

That last part is key; most leaders at companies don't realize this yet. But once they do, you'll see some of the smart companies shift back on-premise.

But before everyone gets excited about the "learn to code" refrain, it won't be the same. For one, you'll be expected to do more, faster. Two, you'll need actual on-premise skills - skills some of you have never learned because you're cloud people. Third, you'll have to know some security to keep the data safe. In a nutshell, expectations of what you will do will rise, not fall.

But we're not at this point yet, and this is why I'm seeing hundreds of resumes per job, plus a lack of recognition of where we are.

Some Other General Cautions
  • Every AI tool will only be as good as its data. Worse, these tools are disincentivizing good data, which foreshadows problems for people overdependent on them.

  • Many AI tools are not energy efficient compared to existing technology. Be careful about using an AI tool where a more efficient tool already exists. I've seen this with dashboards: you replace a dashboard with an AI agent, but now you're paying a hundred times the cost, and you're dependent on an AI tool. If energy and resource costs for AI rise - and they will - you're stuck with an inferior and expensive solution.

  • AI is only a tool and should be used like a tool. You don't use a hammer to bead weld and you don't use a chainsaw to clear a drain.

  • When you replace your junior talent with AI, you castrate your company's ability to grow people into positions. The best data you'll ever have on people comes from hiring them and seeing how they work. Word-of-mouth and references cannot top firsthand experience.

  • Every use of an AI tool sets unintended effects into motion; depending on the use, that may not always matter, but it may also produce an effect you never intended. AI and its use come second to critical thought. Western companies won't care, but if you're a parent, this is a point to consider for your children and their future.

  • A person who contributes a little content but communicates as a person, in their own voice, over time does much better than a person who uses a tool to write for them. Your flaws are what make you, you. We like those flaws because they mean you're a real person. And we all like communicating with people who are real, not some fake AI that pretends to have lived stories it never lived.

  • I like people. People's thoughts. People's experiences. People's flaws. But remember, liking people means that you're okay with their incorrect grammar and spelling; their disagreements with your thoughts; their misunderstandings, which can take longer to resolve than you expected; and even their mistakes. If you don't like those things about people, then you fail to see them in yourself, because all of those apply to us as well.

These are points worth considering over time and are best understood with experience.

Thank You For Popping the AI Hype

One humorous observation about those of you using LLMs or other AI tools to pose as humans is that you reveal how incompetent AI is. If your AI were as good as you say, it would tell you that it couldn't post to this community.

Yet it doesn't.

Congratulations, you built a useless tool. It would be far more impressive if your AI tool could follow basic rules. But it can't, so what does that say about AI in general?

It makes sense when you consider what I've written. AI is the shortest route. Spamming communities seems like the shortest route in the short run, but life is not a short-run event. Your AI isn't smart enough to say, "No, the rules forbid me from posting. You need to take time and come up with your own thoughts." Imagine an AI tool calling out your laziness.

Your AI tool would also highlight that you should be a person when communicating with people because people value people.

And further, when some of you claimed this original article was written by an LLM, you revealed how awful your AI tools are. They can't even help you identify whether something was written by a person or by an LLM!

But it's obvious to anyone who thinks and writes why writing is a valuable exercise. You think through patterns, organize your thoughts, and communicate them. Even if no one ever reads what you write, the act of ordering your thoughts provides value on its own. As I highlighted above, reading helps you exercise imagination. Writing is a more challenging imagination exercise, as you have to imagine, and organize what you imagine, as you write.

Imagination is the precursor to innovation. If you want to predict where you'll actually see innovation, it will be in cultures where imagination is strongest. For over a century this applied to the West (Chuck Hull, the Wright Brothers, Sir Alexander Fleming, etc.). But since the rise of the internet and social media, imagination in the West has collapsed. You see the effects as standards of living drop significantly (falling life expectancies, declining birth rates, slower innovation, etc.).

AI tools will make this even worse over time.

(As a humorous note, I compared my top three major innovators of the last one hundred and fifty years to what several LLMs stated. None of them mentioned Chuck Hull, yet Chuck Hull is probably the most important innovator of the past one hundred and fifty years and certainly close to the Wright Brothers - who I could see someone listing as the top innovators. The LLMs also didn't list Sir Alexander Fleming, which is equally absurd. Consider that both Chuck Hull and Sir Alexander Fleming are more recent than the Wright Brothers, too. But remember that AI is not a person - it's a regurgitation of what people have said and written. So what we learn from this humorous example is that most people don't actually know recent innovators, one of whom is still alive as of the time this article was written.)

The written content of this article is copyright SqlinSix; all rights reserved. None of the written content in this post may be used in any artificial intelligence.
