I first wrote this article in 2026 for social media to get feedback, then wrote and updated the final version here. It applies only to my early observations in 2026. Thank you to all the posters who replied to the social media posts; the feedback was extremely useful in reflection.

I'll start this article by demolishing the myth of AI Engineering demand in 2026.

There is no high or widespread AI Engineering demand. Anyone posting that there is, is selling a product - usually educational, but sometimes a SaaS tool that could be built with one or two prompts. As someone who has hired and who works with recruiters and firms on hiring, I can report that a single opening can draw upward of 300-500 resumes within a few days right now.

Overall, the tech market is almost as bad (most positions receive about 200-300 resumes within a day). I won't bore anyone with the "why" because there are countless theories you can read, but tech is not hot, and I'm hoping we stay in a secular tech bear market for a while to flush out all the hype.

We may all someday look back at tech the way we look back at $130-a-barrel WTI oil in 2008 - that felt good to the oil industry at the time, but look at its stagnation ever since. He-who-cannot-be-named may be viewed the same way for tech.

That's bad news for those of you hoping for a future tech career.

I know many exceptional people in this industry who cannot get a job. That's any job, not just a lateral or upgrade position.

This should give every reader pause, especially the readers who want a future tech career.

Industries That Pull Equal Opportunity

When I started Automating ETL about 12 years ago, the industry faced a talent shortage. ETL positions effectively had a negative unemployment rate: for every one ETL developer laid off, there were 20-30 open jobs. It was not uncommon to walk into an interview and be offered the job on the spot. In fact, that was one reason I created the course. I once received 11 job offers in 2 days. Notice I wrote offers; many other companies were also interested in interviewing and hiring. It felt overwhelming.

That describes an industry in high demand. They didn't care about degrees, certifications, or projects. They cared about one thing: "Do you have an interest in working with data and cleaning it, and can you show us that you can do a little of it?" Even if you couldn't, you could sometimes start at a junior level and they would train you.

From hearing from many early students, I learned that most were (1) paid by their company to learn the material, (2) given a learning stipend that they chose to spend on my course, or (3) aware of the demand and in need of some basic skills to land a job or build a project of their own. Exceptions no doubt exist, but at the time I started the course, learning ETL made a lot of sense.

Recall that Udemy was new at the time, yet some companies were willing to take a risk with people who were teaching on the platform.

The industry has changed.

Companies may pay their existing talent to learn and expand their technical skills, but they're not paying outsiders to learn technical skills and then bringing them on board. Based on what I'm seeing and hearing from many recruiters, a single position may draw up to 200 resumes. That figure runs even higher depending on the job and benefits - remote jobs may receive 500-700 resumes in a few days!

For instance, one recruiter I spoke with while originally writing this article stated: "I've spent this entire week in interviews," she started. "Literally, back-to-back-to-back calls. It's never been like this, and it's only a small fraction of the applications and resumes we've received." I share the recruiter's feelings; I constantly get asked to help interview people, even with a large volume of work already. This is one reason I wrote a thread on filtering out the volume - I don't like interviewing people and I have too much on my plate, so hopefully that helps some of you.

This is all the polar opposite of when I started the course.

This is one reason I neither market my course nor have released the latest version. I don't expect to for at least several years, and I also dissuade anyone interested in the Udemy version. You can see this on the landing page, which reads as follows:

Note that this course is no longer actively updated as of 2024 as far as the specific curriculum content. If you are looking for the latest in ETL/ELT development, you can reach out directly. In addition, as of late 2025 the data industry (including ETL) is facing a significant reduction in demand. This course pricing has been adjusted to dissuade new students for late 2025 until the industry improves.

Welcome to someone who actually has integrity and isn't trying to sell you on a course when the timing is not right and may not be right for a while.

As I frequently share with my children, you only enter an industry that is pulling you into it. If you're one of the few focused people remaining in the world, you will greatly benefit from this advice: enter industries that pull you in, because doing so saves your most valuable currency, time. By contrast, most people want to enter industries that are hyped, which means entering the wrong industry at the wrong time. In addition, exceptional people do not believe they are the exception to the rule or that industries will quickly come back. That's another golden gem here: the sign of an average person is one who believes they are better than most and that "this trend is just temporary."

In general, you don't have to end up where you start, but you do want to start strong. Don't blame the lake when you fish at one with few fish; blame your choice of that lake when you could have chosen one packed with fish.

An Industry That Is Pulling

Down the road from where I live, a plant pays people high wages to learn welding and to weld materials.

They don't care about your pointless degree, certifications, or skills. If you don't know how to weld, they'll teach you and put you immediately on projects. They pay significantly above minimum wage, even for young people who are old enough and willing to learn to weld.

As I tell my sons, you can learn how to weld, weld for a while, then later do other things. In addition, because welding integrates knowledge from industrial fields, it makes a good starting point for pivoting into other industries. You're also practicing chemistry and physics - you're literally working with heat and metal (or metal alloys).

In addition, many blue-collar skills have changed only slightly over time. An early investment in them continues to pay off handsomely in good seasons, and they are not skills you have to constantly re-learn.

The technology in demand 10 years ago is very different from what's in demand today. What tech workers don't tell you is how much of their own time they spend learning new technology, attending events, and so on. That's great if you like learning as I do, but if you want good work-life balance, stay out of tech. On top of constantly learning new things, you'll be facing an industry in a bubble that people are starting to see through (plus, their standard of living is not improving and all they hear are more deceitful promises from tech).

If you're young, new to the profession, or the parent of high school kids, this is something to think about in the bigger picture. I used to joke with parents: you have 5 kids, only 1 of the 5 gets to go to college. Parents always pushed back, but that very pushback showed they would devalue what a college education meant as more people went to college. They ended up creating massive demand, which pushed prices significantly higher. Ironically, these same parents and students complained about education's costs while contributing to the problem!

I do find it peculiar that people who cannot live a single day without food, water, or electricity don't want their kids to become farmers, plumbers, or electricians. I have gone months without a cell phone and recently quit using a smartphone. I could not do that with water and stay alive. I could only go so long without food before facing problems. Living without electricity would be possible, but very difficult.

There isn't a single product I've made in my years in tech that is required to live or that makes life much, much easier. A few other people I know have quit using smartphones, and their happiness, like mine, rose significantly. These reflections should give us pause. What are tech companies even doing?

Finally, the obsession over digitizing everything may come back to haunt some of you when you painfully learn that the digital world can never be secure. The analog world requires physical presence, which protects you in key industries like water, electricity, etc. You don't need to be a military general to recognize the problem with digitizing everything, especially in key industries.

The HALO Future?

While Josh Phair wasn't writing from a career point of view, I would advise my kids to consider the advice in his recent post: Hard Assets Low Obsolescence.

Some people believe that robots are going to do all jobs in the future. Yet if I asked them to name even 3 elements from the periodic table that make up any robot, they couldn't answer. If I further asked about the supply and demand of those elements, I'd get silence. If I asked how demand for those elements would shift both the supply and the cost, the silence would be absolute.

Yet robots are going to do all our work in the future? I first started making this point in 2023, when some of the AI innovation we now see started to rise in popularity. No one who claimed robots would do everything could name even 2 elements on the periodic table. At least now (2026), I'm finally starting to hear this recognized.

These assertions are straight out of the "We'll have fusion in five years" promises I heard when I was six years old.

Guess what? We still don't have fusion decades later.

"But we'll have fusion in five years!"

No we won't.

The people who run around saying these things reveal that they haven't lived in the physical world. They have no idea what they're saying. Anyone who's done physical work, like welding, will tell you that only so many elements on the periodic table can withstand that heat. A robot as functional as a human at this work will be limited to certain situations. It will hardly cover all the situations required, especially in the context of repairs.

Now, ask plumbers, electricians, farmers, and other blue collar workers the same questions about what they do.

But young people who consider these thoughts have a big advantage.

Robots will have to be much cheaper than what they replace, and most won't be. So what is hard to replace, and what isn't? Again, this is where critical thought works; those of you who've already lost your minds to AI won't know.

However, Josh's post makes a good point to think about when considering what you want to do and will be doing in the future. As a note, Josh is writing from an investment perspective, but the logic is similar for careers.

Popping the AI Hype

Even with the new AI tools, I can easily predict that 20 years from now in the West:

  • Your income will have stagnated relative to the price of homes

  • Your income will have stagnated relative to the cost and service of healthcare

  • Your income will have stagnated relative to the goods and services required to exist

And in 20 years from now in the West, you'll still be hearing about how tech is solving problems when it's doing absolutely nothing for the actual things that we all need.

Keep in mind that over a decade ago I was one of the only demographers predicting that the Western Millennial generation would not outlive its parents. Experts at the time predicted that Western Millennials would live to 120 or beyond. Not only were they all wrong, but my predictions actually underpredicted how badly Western Millennials would fare in terms of health.

What I just described in the above paragraph is not a higher standard of living. Yet Western Millennials are "technology natives" and technology has done nothing for them in the bigger picture of life.

Let me repeat: if you're a tech worker, this should give you pause.

Meanwhile, I'm one of the few people who caution that AI is about more than creating models or using GPUs; it involves heavy resource use. For instance, Robert Friedland recently highlighted this by breaking down some of the minerals used in a data center. For the record, that video says nothing of the water demand, which these data centers will generate in large volumes (some populations are already pushing back for this reason).

This last paragraph highlights why I shared Kitco's interview with Dr. Kaplan with friends and family. My main point to them was not about guessing the price of things, as I don't care about that. It was that his interview fundamentally highlights that we've underinvested in the physical world, and that what we'll see in the physical world is actually a correction. So when he says "We'll look back on $3,000 gold as a gift," consider that viewpoint in the context of a society that underinvests in something it will desperately need, where only an upward correction alleviates the problem over time.

The New Internet Contributions

What I've been witnessing with internet contributions - and it's shocking:

  • Some of you guys need an LLM to help you write 2 sentences. Think about that.

  • Some of you guys can't understand basic writing and need an LLM to help you understand something.

  • Some of you guys turn to an LLM at the first problem you experience, rather than taking a moment to consider whether you should even solve the problem in the first place and, from there, thinking through how you would solve it.

  • Some of you guys hype AI stuff when, as we've said from the beginning, AI is more of an energy and data story than an AI tool story. This guy picked up on that and blew you guys out.

  • Some of you don't have any clue why doing things on your own is so important and valuable.

If you want to work in tech but need an LLM to write a sentence or to understand a basic post, you're headed for a world of trouble. LLMs have made critical thought even more valuable, and LLMs lack critical thought - they inherently rely on others' input and take the shortest route, neither of which is critical thought (or creativity).

Many of you are using LLMs to create AI garbage, a new form of spam that is completely disconnected from human experience. Any place that allows you to post AI garbage like it's going out of style will end up losing everyone anyway. We're seeing this across many social media platforms already; once people realize the content is just LLMs, they leave.

I've seen more friends delete their LinkedIn, Reddit, Facebook, X, Instagram and other social media accounts in the past year than in the previous decade. Why? In many of their words, most of the content is fake. They aren't interested in fake content. As it turns out, most people want to see and hear actual people, not AI garbage.

There's also a bigger pattern about people that those of you who keep using LLMs are missing.

Most people want to hear people's thoughts. Who knew?

AI doesn't have thoughts. AI is garbage that comes from other people's input. AI isn't able to create anything meaningful on its own because it's not a flesh-and-blood living being. It's just a regurgitation of input. The fact that some of you still don't get this says everything.

Anyone who knows anything about creativity will tell you that creativity comes from a lack of input, not more input or even combined input. This is why AI will never be creative. It simply finds the shortest route to anything - that's all it can do.

Is that creative? No.

Star Wars was the shortest route to what? It wasn't the shortest route to anything.

In fact, most movie experts thought Star Wars would be a failure and that George Lucas didn't know how to write a story. People forget that Star Wars' success shocked the experts. If modern AI had existed then, it would have predicted that Star Wars would fail, because the heavily weighted input said so too.

Input isn't creativity. As I've cautioned, imagination is a precursor to creativity, and the more you use AI (much as with search engines in the past), the more you limit your ability to imagine. It's peculiar that some of you don't realize this.

The good news for the 1% of you still reading this and understanding what it actually means is that you are 10 years ahead of most people. You know that people want to connect with other people, not AI. You realize that AI can be useful in some situations, but that it will be costly over the long run when people expect to be communicating with people. If you use AI to speak with people, you'll lose over time - and big. But if you keep AI in its rightful place, you'll win - and big.

People are not interested in what your AI says. We're interested in what you think. You can have 10 typos in your writing. We don't care. Your writing is your thoughts - mistakes and all - and that's far more interesting to us than any AI post that formats and types everything perfectly.

Good Uses of Artificial Intelligence vs Bad Uses

I'm not writing this to say that all uses of AI tools are bad. In fact, I'm writing just the opposite because, like a calculator, AI is a tool that can be extremely useful. You can use some of these tools to improve what you do already.

For instance - and I am actually shocked that no one has thought of this yet - if you're a recruiting firm, negotiate with an LLM provider for an analysis of the queries users send to the LLM. From there, you can identify who's using LLMs to learn (the whats, whys, and hows of improvement) versus who's using them to replace critical thought - the latter group you want to avoid hiring.

The former group: gold! Who's using these tools to increase their productivity and skill while enhancing their thinking? Some of these LLM tools have this data, and it is extremely valuable to companies that want exceptional talent.

What you just read contrasts someone using AI to improve themselves (understanding the what, how, and why of things) with someone thinking only of first-order problems so they can move on to amusement (a word which literally means "to not think").
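To make the recruiting idea concrete, here is a minimal, hypothetical sketch of what such an analysis could start from. The cue phrases, query log, and category names are all my illustrative assumptions - no LLM provider offers this as a product that I know of - and a real analysis would need far more than keyword matching:

```python
# Hypothetical sketch: bucket LLM query logs into "learning" queries
# (asking why/how something works) versus "replacement" queries
# (asking the tool to do the thinking). Cue phrases are illustrative only.
LEARNING_CUES = ("why", "how does", "explain", "walk me through")
REPLACEMENT_CUES = ("write this", "do my", "answer this for me")

def classify_query(query: str) -> str:
    """Return 'learning', 'replacement', or 'unknown' for one query."""
    q = query.lower()
    if any(cue in q for cue in REPLACEMENT_CUES):
        return "replacement"
    if any(cue in q for cue in LEARNING_CUES):
        return "learning"
    return "unknown"

# Tally a small, made-up log of queries.
queries = [
    "Why does a left join return NULLs here?",
    "Write this cover letter for me",
    "Explain how indexes speed up lookups",
]
counts = {"learning": 0, "replacement": 0, "unknown": 0}
for q in queries:
    counts[classify_query(q)] += 1
```

Even this toy version makes the distinction measurable; the interesting part for a recruiter would be the ratio per candidate over time, not any single query.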

The same goes for something like writing.

You write a story, but you dislike how many instances of "is," "was," "were," etc. exist in it. You ask an LLM for help with visual verbs that would enhance the story.

It's still your story, but the LLM is helping you like a dictionary/thesaurus combination to enhance your story for your readers. The difference is that the LLM can do this faster than you looking up words in a thesaurus or dictionary.

But you're still thinking about how you tell the story, how you organize the events, and what happened in the story - all extremely good skills to practice (especially organizing your material). Like the example above, this writer is using the LLM as a tool to assist, not as a replacement. There's a huge difference, and if I wanted to hire writers, these are the writers I would seek out.
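You don't even need an LLM for the first step of that workflow. A few lines of code can show you where the weak verbs cluster before you ask for stronger alternatives; this is a sketch of my own, and the verb list and sample draft are illustrative:

```python
import re

# Count the forms of "to be" in a draft so you know how much
# weak-verb cleanup to ask for during revision.
WEAK_VERBS = {"am", "is", "are", "was", "were", "be", "been", "being"}

def weak_verb_counts(text: str) -> dict:
    """Map each weak verb found in `text` to its occurrence count."""
    counts = {}
    for word in re.findall(r"[a-z']+", text.lower()):
        if word in WEAK_VERBS:
            counts[word] = counts.get(word, 0) + 1
    return counts

draft = "The storm was loud. The windows were rattling. It is a long night."
print(weak_verb_counts(draft))
```

From there, the LLM (or a thesaurus) handles the part it's genuinely faster at: suggesting replacements, while the story stays yours.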

And one sign that the LLM has helped you over time: you need it less. Think about what I just wrote. One sign that AI is doing its job is that you depend on it less, not more. Again, where do you hear this? Nowhere.

Another example is using AI to help you understand concepts in school: a particular concept is hard for you. Ask for more practice and for the steps on how to think through the problem. In mathematics, inversions always made sense to me (subtraction, division, square roots); their opposites, like multiplication, were harder. Now you can use these tools to help you connect the dots and think through these problems.

Again, you're not replacing critical thought. You're practicing improving where you may be weak or where you may be able to leverage an existing skill.

These are golden uses of AI tools.

Using AI tools to pretend to be you on social media so you can free up beach time is not a golden use. Sure, it may feel good, but in the long run, as people realize your social media presence is just a bot, they'll recognize your lack of presence (plus what that fundamentally says about you as a person). Will many people try this over time? Yes. Will many succeed in the short run? Of course. But life isn't a one-time game; you play it over and over, and we all get better at weighing real versus fake, even if it takes time.

Challenge: An AI Project To Complete

Some of you may have noticed that browsers are frequently pushing updates. Have you read the latest terms covering what they do with the information you read and how you browse?

If you have, then congratulations, because you're major steps ahead of Medium, Substack, and other content providers, who are rapidly falling behind what browsers are doing with information.

But knowing isn't enough here.

It's time to apply.

Use an AI tool to build your own browser. Don't build an exact replica, but build a browser that has the features that you want. Do you just want to read? Then build that. Do you just want to watch videos? Then build that. Take time to consider what you want, then use an AI tool to make your desired browser a reality.

You will learn more about AI with this exercise than from any video, article, or image on the internet. You'll also have a tool you can use, if you're willing to take the risk, since any tool you make may come with poor security (if you don't know all the security nuances). This latter point is less of an issue if you're simply using the browser to read text on the internet with no images.

(Note: this is meant as a fun project. The second you start to think about selling anything you make with AI, you invite a significant amount of complexity because you now have to protect the tool against malicious actors. You may be okay with these risks for yourself, but others may not be okay with these risks, and you're inviting that complexity.)
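If the text-only route appeals to you, here is a rough sketch of the core of such a browser using only Python's standard library. This is my own starting-point sketch, not a secure or complete implementation: no cookies, no CSS, no JavaScript, no images, and only the crudest text extraction.

```python
from html.parser import HTMLParser
from urllib.request import urlopen

# A minimal text-only "browser": fetch a page and keep only its
# visible text, skipping scripts and styles entirely.
class TextExtractor(HTMLParser):
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0  # >0 while inside a <script>/<style> block

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def html_to_text(html: str) -> str:
    """Strip markup and return one visible text chunk per line."""
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)

def read_page(url: str) -> str:
    # Text only: no images, no scripts, no stored state.
    with urlopen(url) as resp:
        return html_to_text(resp.read().decode("utf-8", errors="replace"))

# Example on a local string (no network needed):
sample = "<html><head><style>p{}</style></head><body><h1>Hello</h1><p>Just text.</p></body></html>"
print(html_to_text(sample))
```

Everything beyond this - history, links you can follow, a reading queue - is exactly the "what do you actually want?" design decision the exercise is meant to force.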

Once you finish with this exercise, reflect over the experience. Would you do it again? What moat do you think software companies actually have now that you've tried this project? What are people not considering with all this?

Your answers will mean more after you experience a project and see the result.

The Eventual On Premise Push

Once companies realize how valuable their data is - at least the ones that actually take care of their data - they will realize how important fully owning it is. You can say goodbye to LLM and SaaS tools then; I know many of you have not read those agreements.

I'm seeing early roots of this with some key companies.

They want full ownership of their data and processes (along with hiring only people who will not use LLMs and who will be as secretive as you'd expect in some security roles). For the record, people who know how to keep their mouths shut have always been the rarest, but also among the highest-paid, talent.

Why?

If an LLM or AI tool can replace entire businesses or make it easier for people to compete against you, some executives are starting to wonder how protected their niche is. I'll be blunt: if you don't fully own your data, your niche will be gone in a few years. Most of you will downvote this into oblivion because it's an uncomfortable truth: you don't have a moat if you can't protect your data, and 99% of people can't handle that right now.

Once these executives connect the dots that these LLMs and other AI tools learned this from the very data they've shared, you'll see that sharing cease fast.

In addition, developing SaaS tools can happen fast. You can replicate the needed features without all the bells and whistles that come at a much higher cost. You'll see more of this over time, especially with tools whose pricing makes no sense when they can be replaced at a fraction of the cost, with full ownership of the data.

That last part is key; most leaders at companies don't realize it yet. But once they do, you'll see some of the smart companies shift back on premise.

But before the "learn to code" crowd gets excited, it won't be the same. For one, you'll be expected to do more, faster. Two, you'll need actual on-premise skills - skills some of you have never learned because you're cloud people. Third, you'll need to know some security to keep the data safe. In a nutshell, expectations of what you will do will rise, not fall.

But we're not at that point yet, which is why I'm seeing hundreds of resumes per job, plus a lack of recognition of where we are.

Some Other General Cautions
  • Every AI tool will only be as good as its data. Worse, these tools are disincentivizing good data, which foreshadows problems for people overdependent on them.

  • Many AI tools are not energy efficient compared to existing technology. Be careful about using an AI tool where a more efficient tool already exists.

  • AI is only a tool and should be used like a tool. You don't use a hammer to bead weld and you don't use a chainsaw to clear a drain.

  • When you replace your junior talent with AI, then you castrate your company's ability to have people grow into positions. The best data you'll ever have on people is from hiring them and seeing how they work. Word-of-mouth and references cannot top firsthand experience.

  • Every use of an AI tool sets unintended effects into motion; depending on the use, this may not always matter, but it may also produce an effect you didn't intend.

  • A person who contributes a little content but communicates as a person, in their own voice, does much better over time than a person who uses a tool to write for them. Your flaws are what make you, you. We like them because they mean you're a real person. And we all like communicating with people who are real, not some fake AI that pretends to have stories it's never lived.

  • I like people. People's thoughts. People's experiences. People's flaws. But remember, liking people means you're okay with their incorrect grammar and spelling; their disagreements with your thoughts; their misunderstandings, which can take longer to resolve than you expected; and even their mistakes. If you don't like those things about people, then you fail to see them in yourself, because all of them apply to us as well.

These are points worth considering over time and are best understood with experience.

The written content of this post is copyright SqlinSix; all rights reserved. None of the written content in this post may be used in any artificial intelligence.

