
Ask HN: Have we screwed ourselves as software engineers?

source link: https://news.ycombinator.com/item?id=31259206

248 points by tejinderss 3 hours ago | 231 comments
I cannot help but wonder: where is our software industry heading? There are overly complicated solutions to simple problems, and a huge push for moving to fancy stacks just for the sake of moving. Distributed systems? Kubernetes? Rust for CRUD apps? Blockchain, NoSQL, crypto, micro-frontends, and the list goes on and on. It's gone to the extreme, to the point where no one is exempt from these things anymore. A couple of years ago I thought it was fine as long as I was not involved in this complexity; I could turn a blind eye towards it. But now this unnecessary complexity has seeped into my day job as well. Managers start talking about "microservices", "writing" Kubernetes operators in Go, and moving away from Python (because it's too "slow"); someone at my company recently gave a talk on how to make a 500-line Python script (which heavily involves inefficient handling of IO) go faster with Rust. Someone else says we need to move our poly-repos into a monorepo, because that's where the leaders of the industry are moving. Even recruiters have started asking questions like "have you looked at modern languages like Go?"

I cannot help but wonder whether we have screwed ourselves pretty badly, with no escape from it. The vocal minority tries to push these overly complex solutions down everyone's throats, and management loves this, because it creates "work" for the sake of it, but it doesn't add any real business value.

What are your thoughts on this? Will the industry move towards simpler solutions after experiencing this churn, or are we doomed forever?

The way I look at it is: there are more tools in the toolbox than ever before, which makes our judgement (the thing they really pay us for) even more important. Kubernetes, for example, is a specific solution to a specific problem. The solution is complex, but so is the problem. If k8s gives you the right trade-offs for your situation, then it's not busy work.

Of course, there are plenty of projects where judgement is thrown out the window in favor of adding a buzzword to everyone's resume. I've heard it called "promotion-based architecture", as in you pick the technology most likely to get you promoted. (If that works, it says all sorts of not-great things about your organization.)

Regardless, I don't think the availability of tools is the root problem. It's an industry-wide lack of emphasis on identifying and understanding the problem first.

Promotion-based architecture is a self-fulfilling prophecy, at least in the BI/data world.

I see everybody around me moving to the cloud, without a really good explanation why. The only reasonable pattern I can see is that cloud experience on top of data skills gets paid 30% more. It made me consider the cloud a lot.

I was considering switching to cloud just so I could put "experience with migration to cloud" in my CV.

For the next person commenting that it makes sense: it doesn't with a 200 GB database and a super predictable workload, growth, and usage.

>For the next person commenting that it makes sense: it doesn't with a 200 GB database and a super predictable workload, growth, and usage.

Depends on the company. I've been working for marketing agencies for the last 15 years, and they're generally staffed by, at most, one IT person who is in charge of a third-party vendor relationship for managed IT services. Those IT resources (internal or vendor) don't specialize in data and often don't know how to deal with it well, predictable workloads or not, and often offer up solutions which are not appropriate (cost or otherwise), whereas there are managed solutions from cloud vendors that are. BigQuery, for example, handles compute and storage for you and rolls it into one reasonable query price. No need to worry about managing anything; just sit Data Studio (included) or any other BI tool (Tableau, etc.) on top and you're good to go.

I get your skepticism and welcome it, but you're being a little rough. We're cloud-native at my current company (I did a partial cloud migration at my last, which was completed after I left) and it makes my life leading a Data Engineering and Data Science team MUCH easier, without the upfront hardware/software costs or long-term contracts, which were STAGGERING and left us with hardware that was much more difficult to maintain and upgrade, and which took up most of my job, as opposed to almost none of it today.

YMMV.

It partially depends, but for a company with one IT person who is not data-focused, there is a low probability that they will have that amount of data.

For anything above what we have, I think it weirdly depends on country/salaries.

In the US, $100k per year is a no-brainer for cloud, as even one FTE would cost much more.

In non-western Europe, $100k is a deal-killer, as you can hire two senior DBAs and still have enough money for quite a reasonable server.

With extensive experience in this vertical (marketing agencies and analytics), 200 GB is a _partial_ day for one data source, let alone a table or total DB size.

It makes sense when you aren't a datacenter hosting company as your core business. Once you're done with your DRP and BCP, it's highly unlikely that your server-under-the-desk (as it usually is) is worth the risk.

By the way, moving 'to the cloud' never means a specific thing because people have made words (intentionally?) vague to the point where you have to explicitly specify all the factors you take into account with your work in order to figure out which 'cloud' they had in mind.

Running a static workload doesn't require elasticity, but a 'cloud' isn't just elasticity. If you want "a program on a server with some storage that is always there" without having to deal with hardware, software, networking, storage, backup, maintenance, upgrades, service contracts etc. then an arbitrary virtual machine where someone else takes care of the rest makes total sense.

And it's easy to compare the cost of the cloud running your hardware to the cost of physical hardware. However, it's much more difficult to compare the indirect costs between the two, and that's where I think many people go sideways.

There's a finance angle behind cloud stuff too that's irresistible for the bean counters: cloud stuff is an operating expense, on-prem is a capital expense. Unfortunately these folks are heavily incentivized to favor OpEx.

I'm not a bean counter; all I know is those guys at my last job would rattle off about it like zombies. IIRC it's a tax thing.

OpEx is fully deductible in the year the expense occurs, whereas CapEx requires amortized depreciation over the life of the purchased item (e.g., a $300k server depreciated straight-line over five years yields only a $60k deduction per year). Makes it harder to calculate taxes.

Wrong. People have been leasing on-prem hardware for decades before the cloud existed.

There are non-IT perspectives on this as well.

My company offers two deployment scenarios: host it yourself and cloud hosted (SaaS). Many of our customers choose the latter because their internal IT systems require more process, and they just want to get something up and running.

Just to add: at my last company it could take up to 6 months to get a VM provisioned on-prem. We could provision what we needed in Azure on demand.

> The only reasonable pattern I can see is that cloud experience on top of data skills gets paid 30% more.

You say that like a pay rise of 30% is not a good enough reason all by itself for many people.

> For the next person commenting that it makes sense: it doesn't with a 200 GB database and a super predictable workload, growth, and usage.

Perhaps not for technical or business financial reasons, but those are not the only possible reasons someone might do something. As mentioned before it can make a lot of sense to migrate to cloud if it means you can get a 30% pay rise.

> I see everybody around me moving to the cloud, without a really good explanation why.

People just buy into the "cloud" marketing. They don't have the ability to think and reason, and so don't understand that "cloud" just means "renting someone else's computer."

I built a complex in-house medical system. Quick, reliable, and liked by the users.

I was in the middle of adding a major new feature when all of the management in the IT department quit. The new people immediately decided that the whole thing had to be "in the cloud." I was removed from the project, and they hired three people full-time to rebuild it.

That was three years ago. The new system is still not online. The users are still using my old system, and the feature I was working on never got added, because my presence and input were not welcome; after all, I "don't understand the cloud." So I got moved to other projects.

People talk about "the cloud" with the same fervor and language as members of a cult. And you will be an outcast if you dare challenge their way of thinking.

Given what you explained, it seems to me that the cloud was an excuse given by the new overlords just to get you out of the way. Cloud is fashionable now, but any excuse would work for them. This seems to be a power grab.

"promotion-based architecture" aka "CV-driven architecture" :)

the auld mortgage-driven development strategy

> It's an industry-wide lack of emphasis on identifying and understanding the problem first.

Or just no real incentives for the people involved to do so. As a dev, I don't get any real credit for biz outcomes.

You are experiencing what I call "The Bisquick Problem". Bisquick is basically flour with some other stuff, like salt, premixed into it, and sold in a box in the USA. So instead of just buying flour and salt, you buy them together, which makes some things easier (like making pancakes), but it complicates literally everything else. You can't use it as flour, or as salt.

With software, the problem is even greater. You can use react, for example, but you will probably start with create-react-app, which adds a lot of things. Or you could start with Next.js, which adds a lot of things. You could use Java, but you will probably start with Spring Boot, or Dropwizard, which adds a LOT of things. Plus all of these starting points imply the use of new languages, configurations, and programs, in particular, builds.

In my view, all of these "Bisquicks" represent experiments-in-progress, with the ultimate goal of the systematic characterization of "software application", in general. In other words, they are systems of alchemy, pushing toward being a system of chemistry, which we don't have yet. So it is bound to be a confusing time, just as it was surely confusing to play with chemicals before Lavoisier systematized the practice.

I think every single field has complexified throughout history. Agriculture now has dozens of chemicals and large machinery. Woodworking now has advanced machinery and a handful of ways to do the same cuts. Software is no different, humans invent more and more tools in each field over time.

I don't think engineers are willingly screwing themselves. Does anyone here choose to adopt something they know will screw them over? Yes, we may be forced into this decision by higher-ups or by colleagues or associates, but those people generally have some reason behind their actions.

As for the field as a whole, none of us can control where it goes. If your org sticks with proven older tech, it will do zero to prevent new frameworks from cropping up everywhere. If you adopt any newer technology, you're now becoming a user, increasing its relevance, helping to test it and prove it, finding bugs and errors.

So no, "we" have not "screwed ourselves". It's simply human nature to complexify and add more tools over time.

I generally agree, but two things I'd like to point out.

If you're using python for the web you're already part of the complexity problem, at least from the perspective of someone deploying PHP 15 years ago. I use Python for web development, and I love it, but deploying webpages used to be copying an Apache config and ftp/scp-ing your files to some location. Now we need app servers and reverse proxies, and static files are served differently, and even though I've gotten used to it over the last decade it doesn't mean it's good.
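
To make the comparison concrete, here is roughly the minimal unit all that machinery revolves around. This is a sketch only, with illustrative names and the conventional ports:

    # app.py -- the smallest deployable unit in a typical "modern" Python
    # web setup: a WSGI callable that an app server runs behind a reverse
    # proxy.

    def application(environ, start_response):
        # The app server (gunicorn, uWSGI, ...) calls this once per request.
        start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
        return [b"hello from behind the proxy\n"]

    # Typical invocation -- the extra moving parts vs. FTPing up a PHP file:
    #   gunicorn app:application --bind 127.0.0.1:8000
    # with nginx on :80/:443 proxying to 127.0.0.1:8000 and serving the
    # static files itself.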

The other thing is that monorepos are pushing back against complexity for the sake of complexity. Why create a new repo when a directory will work just fine? I think a ton of people got repo-crazy because their corporate Jenkins server only allowed one project per repo, but it is trivial to check the latest git hash for a directory and base your deployment on that (see the sketch below). ...I have a project I inherited that has 14 repos for a webpage with maybe 5 forms on it. I've mostly handed it off at this point, but every time I have to look at it I end up preaching about monorepos for weeks.
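
On the Jenkins point, the trick is small enough to sketch. Here is roughly what basing a deploy on the last commit that touched a directory looks like in Python; the services/billing path and the stored-hash bookkeeping are made up for illustration:

    import subprocess

    def last_commit_for(repo_root: str, subdir: str) -> str:
        """Hash of the most recent commit that touched `subdir`."""
        result = subprocess.run(
            ["git", "-C", repo_root, "log", "-1", "--format=%H", "--", subdir],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    # Redeploy a service only when its directory actually changed:
    previously_deployed = "0" * 40  # in practice, loaded from deploy metadata
    current = last_commit_for(".", "services/billing")
    if current != previously_deployed:
        print(f"redeploying services/billing at {current}")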

s.gif
> If you're using python for the web you're already part of the complexity problem, at least from the perspective of someone deploying PHP 15 years ago. [...] deploying webpages used to be copying an Apache config and ftp/scp-ing your files to some location.

The problem with deploying PHP is that it immediately gives you something for very little effort, but the effort scales incredibly disproportionately once you outgrow your need for the bare functioning minimum.

I personally prefer the "modern" approach of dropping a single, statically compiled binary, which exposes a listening HTTP socket. In case of Go or Rust+Actix, this could sit directly on port 80, but as soon as you add HTTPS, ACME, virtual hosts, etc the story is basically the same regardless of language/framework choice, PHP included.

Oh I agree, I still get chills when I think about troubleshooting PHP.

I was really just making the point that the effort to deploy Python as a webapp these days would have been considered overly complex by the average developer 15 years ago. Just as some of the current stuff seems to the OP. So maybe not all seemingly complex stuff is bad.

The debugging story around PHP is better than other major scripting runtimes like Node. With PHP and kcachegrind you can really understand where you're spending CPU easily. The tools for Node work but just aren't there yet. Another nice combo is Java + Mission Control.

Regardless of how you're deploying things, having unrelated projects in the same git repository might be simpler (maybe?) but certainly seems worse at the same time.

> Regardless of how you're deploying things, having unrelated projects in the same git repository might be simpler (maybe?) but certainly seems worse at the same time.

Sure, if they're actually unrelated, or being managed by separate teams then split it up. Though I think the default should be to have one, and split it when there is an actual reason to, especially if it's for the same project.

The example I gave wasn't an exaggeration, 14 repos for one project. It was originally built and managed by one person who had them all in the same directory and would open them all in his IDE at once and basically worked on it as a mono repo. After that person quit, when others needed to figure out which repo to clone to fix things it was a nightmare.

I know monorepos can be extreme, like Google's, but I just mean one per team, or at least one per project. You shouldn't have to worry about versions of your libraries when there is only one project using those libraries.

Edit:

For example, a project I did the layout for has a python backend, with a spa frontend, and some ansible playbooks for deployment, integration tests, and a few libraries only used by this project.

Each of those 5 things has a top-level directory, and we deploy a test server for every branch, plus most of us run a local server. We never have to worry about versioning between our own projects, because if they're in the same branch, that's the version to use. If we split it into separate repos, then every time we added a new field to the API and needed to update the frontend and tests, we would have to manually specify which versions of all the repos go together to build a test server, or even to run a local dev server.

I'm wondering what engineering decisions drove the project to be split into 14 parts?

Is it a microservices thing, or what?

Yes, it was mostly microservices, with a separate frontend repo, a separate repo for deploy scripts, and some separate libraries shared between the microservices; there may even have been a separate repo for documentation, I don't remember exactly.

I think it really just came down to dividing the repos into chunks that the deployment scripts could use/trigger off of, instead of developing in a way that makes sense for developers and bending the deployment to fit. Since it was all in one directory on his computer, it was basically a monorepo from his perspective. Committing from the IDE just committed to whichever repo was changed, and since he was the only developer he never saw the downside. When I had to take it over, ghorg [0] really came in handy. It's a script to clone all repos from a user/organization on GitLab, GitHub, and others. Then once I opened up all the repos as one PyCharm project I was able to get some stuff done, but at that point I might as well have just had one repo with a separate directory for each.

[0] https://github.com/gabrie30/ghorg

Why does it seem worse? This is perhaps just each of our individual biases and values hiding behind a preference, but I can't identify an objective reason why it's worse, other than Jenkins or the devtool of choice not handling it out of the box.

Don't get me wrong. CI/CD and other dev tools not handling monorepos well is a totally reasonable objective reason not to use monorepos. But it's also mostly about the tool, not the monorepo concept itself.

In the last year, there has been a concerted push by certain influential engineers to split our monorepo up. This was first done by splitting things into two repos that were supposedly independent. But they really weren't. Naturally, there was code and configuration that we wanted to be common between the two repos. So now the solution to every one of these problems is to break off more code into its own repo. As the saying goes, as soon as you have two objects, soon you will want a third. It has become a complete nightmare to work with, and as far as I've seen so far has had zero tangible benefits.

> If you're using python for the web you're already part of the complexity problem, at least from the perspective of someone deploying PHP 15 years ago. [...] deploying webpages used to be copying an Apache config and ftp/scp-ing your files to some location.

There is a MASSIVE difference in the value proposition of a proper deployment from version control and using something like docker w/ docker-compose to facilitate running your project locally (which is not present in your PHP example) versus what the OP is talking about, which is the idea that you should run EVERYTHING on Kubernetes and write Rust for CRUD apps.

Unneeded complexity is the greatest enemy of the software engineer.

Unnecessarily complicated is the default. Choose the elegant thing wherever possible (it's not always possible, but often is).

Actively avoid complexity, or it will shackle you.

Sounds to me like you've worked at trendy tech companies and want to keep working at trendy tech companies. That means you're going to have to work with trendy technology.

A massive number of companies, maybe even the majority, don't do this. They use what works and upgrade when needed, not when it's cool to use the new thing. They just don't tend to pay like trendy companies and don't look as good on a resume as trendy companies.

If you believe that's all there is to those ideas, maybe you need to step away and think about them for a while. Sure, there's going to be some resume padding happening in larger orgs. But all those ideas solve real problems too.

I think you're just in a very negative space if you start with "distributed systems" as something overly complicated. At some scale, getting a bigger machine either doesn't make financial sense or is just not possible to implement efficiently. Some ideas are taken too far or implemented where not needed. But I'd recommend learning where each one of them started and why. Criticize for valid reasons, but don't become a curmudgeon.

>> If you believe that's all there is to those ideas, maybe you need to step away and think about them for a while. Sure, there's going to be some resume padding happening in larger orgs. But all those ideas solve real problems too.

They do solve real problems. The question is whether or not they solve the problem at hand, and if they create other issues in doing so.

I was re-decking a back yard bridge with a friend and he brought a framing hammer. I'd never used one before and I always had a finishing hammer and didn't know the difference. I learned a new tool and even ran out and bought my own for the project, which worked fantastically well. It's still in my toolbox and hasn't been used since. You just don't use that thing to hang pictures on the wall because it may well f-- up the wall a bit. Using the right tool for a job is way more important than using a particular tool for any other reason.

I've never liked this best-tool-for-the-job mental model in software engineering. A given project has multiple needs, and unlike more physical tools there is a very high marginal cost for each incremental tool you use. So there is a real trade-off between many well-suited tools and a few more generalized, less-fit ones. This isn't to say there's an obvious place to strike the balance, but the "best tool for the job" metaphor undermines recognizing it, and I've found this balance to be at the core of good tool-picking for a given project.

> But all those ideas solve real problems too.

All of them, except for blockchain. That one can go die on the trash heap of history.

Access to the dollar is a real problem. Stablecoins help resolve that.

I am not a crypto stan but it has at least one usage.

Well, it's a good tool for money laundering and purchasing drugs.

That's not fair, it makes a great Ponzi scheme, too.

> That one can go die on the trash heap of history.

As long as there are Rust jobs or blockchain-related jobs out there, you're going to have to cope and wait a very, very long time for that 'die on the trash heap of history' to happen.

I'm not sure you realize just how quickly that can all go away (and does). Blockchain companies are a dime a dozen. They don't all last forever.

> I'm not sure you realize just how quickly that can all go away (and does).

So why didn't it die years ago, as many incorrectly predicted it would?

> Blockchain companies are a dime a dozen. They don't all last forever.

And who said that the 'companies' would last forever? Why do you think I said 'as long as'?

I'm just wondering whether the whole thing is guaranteed to go away 100%, used by absolutely no one, since clearly someone here thinks so. That is my question.

What's crazy is the scale a modern computer can operate at. There _are_ problems that need more scale than that, but they are the minority. Meanwhile, it's out of fashion to spend time improving application performance, and instead people go horizontal early, with devastating complexity issues.

I don't disagree with you, and your examples are definitely over-engineering / busy work. In my experience a lot of it is driven by the desire of young engineers to learn a new language. If someone paid me to move something to Rust, I would do it. I heard good things about Rust and I would love to get paid to learn it.

But has being a software engineer become easier or harder over the last 30, 20, 10, 5 years? I wasn't an engineer for that long, but my impression is that programming today is a lot easier. Dev tools, compilers, and linters are very good. There's also a lot more community documentation on Stack Overflow. Some of the complexity is hidden from the developer, which is good and bad. It can bite you in the ass later, but in 95% of cases it's a good trade-off in my experience. For instance, my preferred stack is Serverless on AWS. I can set up a single config and have cloud storage, an API, a database, logging, and auth, all permissioned and in a file I can check in. And with a generous free tier, it's pretty much free. I'll admit if something goes wrong it's not fun to debug, but it's remarkably fast and simple for me to spin up a CRUD API.

No, it has definitely got worse. I've been doing this for 15 years; the sweet spot was the Rails revolution. Before that, a lot of frameworks were a bit too much magic, with not enough understanding of how browsers, HTTP, and HTML worked.

Simple MVC stacks spread to all languages, with jQuery front-ends doing enough but not a lot: JavaScript enhancing easy-to-reason-about server-side stacks.

You used to spend a couple of days a year, yes YEAR, mucking around with tooling. Now it wouldn't be too much of an exaggeration to say you spend a day or so a week fighting some part of your stack because it's been over-engineered.

IDEs can't keep up so you have to run inscrutable command line tools that fail if you have deviated even slightly from whatever error-prone rain-dance some moron claiming 'best practice' has forced into the build process.

Programming used to be about writing code to solve business problems. The shift to DevOps has been a massive productivity drain and most stacks are now incredibly brittle.

The worst part has been debugging, which you touch upon. Native calls, simple call stacks, easy error logging. All gone.

Moving everything into hard to debug http calls has been a disastrous productivity sink.

The irony has been that as languages have got significantly better our productivity has actually dropped massively because of the ridiculous amounts of over-engineering in "modern" code bases.

I recently worked on a project with 2 devs that took 3 months with a modern stack. The prototype in a standard MVC stack I'd made to demo to the client took 2 days.

It's utterly ridiculous and sometimes I feel like the boy in the story about the emperor with no clothes.

> The worst part has been debugging, which you touch upon. Native calls, simple call stacks, easy error logging. All gone.

> Moving everything into hard to debug http calls has been a disastrous productivity sink.

There are a lot of good things coming out of the latest big experiments but this has been a major blow. I have worked on software where the intended debugging approach was to write some code, manually push it out to a shared dev environment, read CloudWatch logs for debugging. It is by far the worst way to debug code that I have ever seen. Things that would take me minutes to debug in a normal setup can take hours or days. Projects like LocalStack aim to improve this a little bit but it's completely counter to the ethos of many "cloud-first" developers.

People still deploy production-ready systems using RoR, Django/Python, or whatever "sweet spot" framework you want to mention. Some run quite successful businesses.

You can't generalise from your experience over the last few months.

> Programming used to be about writing code to solve business problems. The shift to DevOps has been a massive productivity drain and most stacks are now incredibly brittle.

Some businesses _have_ to "shift to DevOps" in order to operate at the scale and resilience required.

Some businesses have unnecessarily migrated to over engineered infrastructure because monkey see monkey do.

Saying "everything is ruined" completely misses the dynamic.

As an earlier poster explained there are a lot more tools in the box now. Making the right choice requires experience, a good understanding of the problem to be solved and discipline in implementation.

Get it wrong one way and you end up with an over-engineered mess that takes forever to get work done with.

Get it wrong another way and you end up overwhelmed by traffic, unable to scale in response and forever fighting fires.

God I am sick of these apologetics every time someone expresses skepticism.

> Get it wrong another way and you end up overwhelmed by traffic, unable to scale in response and forever fighting fires.

To nitpick this specifically: over my 12-year career toiling over this stuff, there has never been a scenario where this required a radical rework to solve. Boring-ass B2B shit rarely requires that level of engineering and, at least in my case, the workloads were fairly predictable and increased in a linear fashion. The one time I did accidentally end up DDoSing ourselves, I temporarily stood up nginx instead of our aging Apache install and was able to serve enough requests to fix the problem. (We then transitioned our app to run on nginx.)

It was one fire, and it took a little bit of brainpower to fix. Then the DO droplets were humming along perfectly, and last I heard they continue to do so to this day.

The operational aspects of this double-digit-millions-per-year business ran on a postgres database that compressed to 6gb.

The next business I worked for did billions per year in business value and, until HQ mandated migrating everything to GCP, was humming along perfectly fine on Heroku for a monthly spend well under five digits. Ironically, I think they initially wanted us to be on-prem but couldn't support our stack and would have left all of that to our devops guys. GCP was the compromise (oof!).

I was looking for a new job recently. Almost every single job advert listed "microservices".

So yes, it is affecting our entire industry. Every aspect of it.

There are a very small number of organisations that actually have any sort of need for a microservice architecture.

Worse still, actually talk to these orgs and they'll say they have a "hybrid" microservice architecture, which is basically the worst of both worlds: all the pains of managing microservices, without any of the benefits you get in a normally built application (derisively called a 'monolith', with all the negative connotations that word has). Half your calls disappear into the black hole of HTTP. No pressing F12 on a method call and going straight to the code. No easy stepping through code in the debugger. No simply downloading the code, pressing play, and having it all work.

I like solving business domain problems. Not tooling problems. Tooling problems are incredibly boring and frustrating to me. A certain type of programmer, rather than actually doing their actual job, absolutely loves introducing tooling problems as busy work, because the actual business domain problems don't interest them. Then they switch roles once they've got the new hotness on their CV, before they have to maintain the craziness they've introduced to the code stack.

Case in point, on a project I helped get over the line recently: I joined with 1.5 years already spent on the project, and development had slowed to a crawl. The lead architect designed a system of DDD, event-sourcing, message queues, and microservices. Just to add a new field I had to edit 10 files. To add a new form I had a PR which edited/added 40 different files. How it actually worked completely flummoxed juniors and mid-level devs; it was beyond them.

All for a 10 page form that would have at most 150,000 uniques in one month per year. Roughly 1 request per second, assuming a ten hour day and 1 request per form page. Child's play.

A standard stack would have easily handled that load, probably even on a VM. A dedicated server would never have gone over 10% CPU. It would have been massively easier to develop, and cost 1/10th in dev time.

At one point I had a quick go at re-writing a section without the trendiness. I switched the crazy event-sourcing for a simple, easy-to-understand service. Over a thousand lines removed, 100 added. Absolutely nuts.

Millions wasted on trendy architecture. Of course the architect left after a year for greener pastures.

Your comment seems to imply that computer programming == web development. Would you say your comments also apply to embedded, mobile, games, data science/ML and scientific computing?

Serious question - if it was so much better then, why are approximately zero new companies building a Laravel/Rails/whatever app and using jQuery for the front end? If it's that much of an advantage I would expect at least someone who is trying that (because surely some are) to succeed with their lean, mean tech stack. Why wouldn't you have just written the project in that standard MVC stack instead of a modern one?

You won't get any objection from me on the debugging point; it's much harder, especially when you're crossing environments, e.g. running the front end locally but hitting a remote dev or QA backend. I will point out, though, that there are logging tools that support the pretty standard practice of having correlation/transaction/trace IDs on your requests, such that you put in a GUID from an error and it shows you the entire request and anything that request spawned.
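
For anyone who hasn't seen that pattern, it only takes a few lines. This is a sketch of the general idea in Python, not any particular logging product's API; the header name is just the common convention:

    import logging
    import uuid
    from contextvars import ContextVar

    request_id: ContextVar[str] = ContextVar("request_id", default="-")

    class RequestIdFilter(logging.Filter):
        """Stamp every log record with the current request's correlation ID."""
        def filter(self, record: logging.LogRecord) -> bool:
            record.request_id = request_id.get()
            return True

    logging.basicConfig(format="%(levelname)s %(request_id)s %(message)s",
                        level=logging.INFO)
    log = logging.getLogger("app")
    log.addFilter(RequestIdFilter())

    def handle(headers: dict) -> None:
        # Reuse an upstream ID if one arrived, so the trace spans services.
        rid = headers.get("X-Request-ID") or uuid.uuid4().hex
        request_id.set(rid)
        log.info("handling request")  # every line now carries the same ID

    handle({})  # e.g. "INFO 3f2c9... handling request"

Grep that ID (or paste it into your log search) and you get every line the request produced, across services if they forward the header.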

The majority are simply going to follow popular opinion regardless of the merits, and developer efficiency is often not that important. I also think the efficiency gains are bigger for smaller and inexperienced teams.

Also, people are getting used to app-like experiences and designers are designing for it. Building an app-like experience is more natural as a Single Page Application, which basically means taking on the modern frontend stack. There are places that push against this, but to do so requires buy-in to the engineering side over product and design. Even then, the engineering side has to be knowledgeable enough to not follow popular opinion and come to the determination that Laravel/Rails/Django is actually the right tool, which isn't always the case.

Yea, people absolutely still build companies with these tools. If you just want to start a SaaS company as a side project, you would do yourself a disservice if you didn't use something like Rails, Django, or Laravel.

The problem is in larger companies: developers stopped caring about just solving the business problems and moved on to solving non-existent technical issues to build resumes, to go to the next job and get more salary. They go from company to company like a parasitic infection, leaving them to rot by introducing Kubernetes, React, Go microservices, and Rust CLI tools.

Also it might be an issue of not being fulfilled in life outside of work.

Indie makers famously do just that. Pieter Levels is making millions with PHP and jQuery.

I consult primarily in Node and React, with all sorts of transpiler mess and shitty packers. The stuff of nightmares; we waste so much time making things work. But simplifying the stack will never get traction among the not-anymore-technical principal engineers, or even amongst the other developers who need a fancy CV for their next gig.

My side businesses use python, django, jQuery, old node.js without modules, rust, svelte.

Engineers hired in big companies want to work on shiny technologies and build their cv.

> Pieter Levels is making millions with PHP and jQuery

no, the reason Pieter Levels makes millions is not PHP and jQuery; it's that he's also a brilliant sales/marketing/business person

> I recently worked on a project with 2 devs that took 3 months with a modern stack. The prototype in a standard MVC stack I'd made to demo to the client took 2 days.

Can you expand on the modern stack and the standard stack? Is the "standard stack" you propose easier to work with now than it was 15 years ago? I guess the "standard stack" is just out of fashion?

My preferred stack (AWS serverless BE, Next.js front end, or just AWS AppSync) keeps me in one language (TypeScript), with some stuff like GraphQL that you have to know. But the tooling around that helps keep my errors at compile time for the most part.

In that case it was a simple .NET Core MVC stack vs. an Angular/Web API stack.

For some inscrutable reason, these days you can't do a simple 3-page form with a results page without making an SPA, or someone will claim you're doing it wrong. It's nuts.

Worse still, the Angular app has all sorts of weird bugs. Auto-complete somehow screws with the validation of inputs, the back button doesn't work properly as you lose all your data, URLs don't work, etc., etc., etc. There's also all sorts of craziness, like the whole admin frontend being bundled with the client-facing part because the front-end developer didn't know how to split them up.

Utterly preposterous.

The problem is often that very skilled devs give advice about incredibly specific stacks that only someone of their abnormally high skill level can maintain.

Throw in a few juniors or a few mediocre developers and the whole project turns into a complete and utter mess.

And don't get me started on your stack, which is mainly inappropriate for most applications.

GraphQL. Talk about the next NoSQL fiasco in the making. Perhaps you missed the whole saga of everyone doing new development in NoSQL, and then, a few years later, the flurry of blog posts about "Why are we losing data? I didn't realize ACID was so important..."

Is being in one language really that much of a benefit? I've worked on several large production apps with node backends and I'd rather use almost any other backend language.

And I take issue with saying you "have to know" GraphQL, that's a pretty specialized tool for pretty specific problems. Most things are not graph structures.

Maybe I'm biased because I've been using node.js for 10+ years but node.js has a decent API.

Simple, slightly opinionated. You just need to be careful of dependencies and what kool kids are doing these days. GraphQL is a complete nightmare.

If you don't transpile anything and are conservative with your dependencies, you're golden.

I've been working in software engineering for 30+ years, so I can say that yes, things are definitely much easier. Debuggers in the 80s/90s were finicky beasts shrouded in esoterica, and as a result it was usually much easier to debug code by adding print statements than to actually use a debugger. I'm still somewhat amazed by the capabilities of contemporary debuggers.

Libraries outside those provided by the OS/compiler tended to be hard to come by. Certainly the universe of freely available library code that we have now was nonexistent (I'd argue that CPAN was a big factor in the spread of Perl in the 90s; well, that and the then-widespread assumption that CGI scripts had to be written in Perl).

As a community, we've collectively learned a lot of important lessons as well. Legacy systems like Perl and LaTeX tend to install common code in a single universal directory for everyone, rather than making it application/document-specific as became the case with Java, and their repositories will only give you the latest version of an artifact¹, which has tended to lead to stagnation since backwards compatibility becomes non-negotiable. Some lessons haven't stuck, though (like the fact that Rust's crates.io only has a single-level namespace for artifacts).

1. Not sure if this is absolutely the case with CPAN. CTAN, which was quite possibly the first centralized repo², does not manage multiple versions of artifacts.

2. I remember when it was first being set up: since there was no guarantee that any tooling beyond TeX would be available to consumers, there were TeX scripts to do things like encode/decode binary files into an emailable ASCII format. The original CTAN FTP server in the UK also had the remarkable feature that it would generate directory.zip for any directory that you requested.

Some things are easier, but then we've made it so quick we become the lone typist implementing the vision of the all-important UX consultant.

Still, part of the industry is moving towards simple solutions.

A refreshing experience was building a mobile app for an Apple device with Swift and SwiftUI. It was a real joy: it works as expected, produces concise code and small files, and offers live preview and reasonably fast build times. Sure, it's a closed environment, but the last time I felt so productive doing UI dates back to Visual Basic.

Counter-example: a simple web app, nothing fancy, and my node_modules filled with around 500MB of files, hundreds of declarations of injected things everywhere.

But nobody forces us to use Kubernetes, nobody forces us to climb the Rust learning curve, nobody forces us to use this multi-platform framework that solves all the problems of the universe.

I try to stick to standard solutions, oft proposed by the vendor: Kotlin on Android, Swift on Apple, C# on Windows. Server code: stick to Java, or try the simple Golang (another refreshing language).

Also, I try to be late to adopt tech; I'm just starting to be confident in Docker, and will see in a few years if Kubernetes could be useful.

But an architect needs complex solutions to justify their job, a team lead needs as many devs as possible to brag about at the next family dinner, and the new dev wants to try this fancy new tech to put on his resume. So they are all fine with it. Just don't tell the company ownership.

Developers/engineers/programmers do this because we crave complexity. Then throw in a dash of elitism/gatekeeping, a sprinkle of CS-trivia-driven hiring (aka leetcode interviews), and you will find these behaviors. Organizations may fear losing top people because other organizations, maybe brand-new startups, use shiny new tech to solve same-old problems, so it looks appealing. It's a shiny-new-tech arms race with a JS framework proliferation (I couldn't help taking a jab at JS, sorry).

Here's a walk down memory lane for you about rewriting apps, circa year 2000: https://www.joelonsoftware.com/2000/04/06/things-you-should-... If you replace names and versions of things with "Go", "Rust", etc. it is pretty much what you describe.

It's another way to favor incumbents with huge resources: an anti-competitive psyop to raise barriers to entry and stymie startups in endless shiny-thing chasing. The incumbents are scared of disruption, so the big players that move a bit slower seek ways to slow down the scrappy players that can move fast. Meanwhile those "fast movers" get drunk on the Kool-Aid of buzzwords manufactured on the incumbents' blogs, and on trends like being agile, microservices, etc., willingly injecting themselves with massive technical debt from overcomplicated frameworks. And nobody's allowed to question it, otherwise it's civil flame war.

Even if you don't believe it's a conspiracy, you have to admit that the dynamic favors incumbents, and that the big frameworks often come from the big incumbents. And if you are a big incumbent, even if you weren't manufacturing this conspiracy but saw its possibility, I mean, why not take advantage of it?

The outlier is the scrappy indie developer like Pieter Levels, who runs his multi-million-dollar-a-year business basically on his own, on a single machine, using PHP, and who only recently started using git. That may be an extreme example, but it paints a picture of what radical effectiveness and efficiency look like, and it's vastly different from the Kool-Aid. But don't mention it, or the mob will come for you.

May the no-code & indie scene save us all. Amen

Did you learn the word 'incumbent' yesterday?

The ecosystem is flooded with over-engineered bespoke solutions, and is sorely lacking in standardized approaches and best practices. It's a core reason why software development cannot reasonably be called engineering, as engineering principles are either never applied or discarded on a whim.

It won't change until we can form a guild (professional association) and turn it into a bona fide profession. Right now, code that one developer creates may be unrecognizable to another developer, even though both are working in the same domain. It would be a disaster if one lawyer could not follow a brief written by another, or a doctor could not decipher which techniques another had used to perform a particular surgical procedure.

"Just because you can drive a car with your feet if you wanted to, doesn't make it a good fucking idea!" --Chris Rock.

Two thoughts on this:

1. This industry absolutely has, and has had for a long time, a problem with "oooooh, shiny!" chasing. We collectively obsess over using the latest and greatest, newest and shiniest, "sexy" technologies of the day. And sometimes (often?) this obsession overrides good judgment and we try to use this stuff regardless of whether or not it's actually a better fit than something older and more prosaic.

2. However, sometimes the "new, shiny" is actually better, for at least certain applications. And we should always be willing to use a newer, better, "sexier" tool IF it actually helps to solve a real problem in a better way.

Unfortunately (1) often seems to trump (2) and we get stuck using the "newest and shiniest" for no particularly good reason, other than the simple fact that it is the "new shiny".

I have no expectation that this trend will ever abate.

> management loves [overly complex solutions], because it creates "work" for the sake of it, but it doesn't add any real business value.

Don't managers understand that development is a constrained resource? They have to choose which projects move forward, where people are assigned, and increasingly, which outsourced service to use because they don't have enough in-house resources to turn to.

My cynical view of the move to complexity is management (or their C-level superiors) are often sold on new platforms or "standards" that require it.

I see it as a tech debt bubble cycle and almost inevitable in any industry/field. It will always ebb and flow.

On one hand it makes work unnecessarily complicated and in some cases creates political problems, because the more complicated a solution is, the more governance it needs, and the more governance, the more politics. A lot of these solutions get put in place because the people promoting them are incentivized to be the ones who found the "solution".

On the other hand it creates new opportunities for those who have the courage to not accept other people's assumptions about what "good" is and to find out for themselves and separate the wheat from the chaff. If you can adeptly use Occam's razor to decide for yourself what works and what doesn't you'll be ahead of the curve. Just keep calm and code on.

I'll take the devil's advocate:

* All these new tools, they give us options, no? Use the right tool for the job, the ability to switch if something becomes old/unmaintained.

* Is this actual complexity or perceived complexity given your experience? The node ecosystem looked very complex for me (someone coming from Python) until I actually got into it. Now it seems pretty run-of-the-mill.

* Is k8s really all that hard? Build a container and you don't have to worry about provisioning it and deploying it again.

There may be good reasons to use some of the technologies you pointed out. And that's a strong may because I can easily come up with arguments in the other direction in addition to yours. I say all this to mean you just shouldn't dismiss it because it seems hard. It may be and it may not be, and if it is it may still be worth your time if the payoff is great enough. There you have to do the legwork to figure that out.

> Is k8s really all that hard? Build a container and you don't have to worry about provisioning it and deploying it again.

No, the deployment of a container to Kubernetes isn't hard. And it better not be, that's supposed to be the advantage.

What's hard is literally everything else about it. And that may be a fine tradeoff if you are at the scale to need it and have a team to manage it. But there are many organizations where that tradeoff does not make sense.

Agreed.

You should know that in the past when OOP was not common, we had to work a little harder doing things like managing our own memory or building a LAMP server to publish our web pages.

There was a thriving market for language and UI add ons. The result was that each company had their own internal dev tools and recruiting people outside of the company who had experience with those tools was nearly impossible.

All that said, we were at a point where entry to programming was easy (think Visual Basic in the 90s). The quality generally went down as everyone pushed their "first project" as if it were a polished product. Finding actually good programs on PC is close to the situation on mobile, where most of the apps are trash.

OOP to me would be classified as the industry blindly moving towards unnecessary complexity. It is indeed the definitive example of how the industry over-engineers things.

While still prevalent today, there's a huge sentiment against this paradigm. Modern languages and frameworks such as React, Golang and Rust show how the tide is turning against this.

Three object oriented languages prove how there's sentiment against OOP?

Or are you saying that Rust/Golang/React are simplistic in a world of over-complexity? React I would generally agree with; the other two, not really.

Very true, but unlike backend languages you can generally get away without knowing much about the internals. Unless you're making dev tools, of course.

I think that as developers, we need to resist these trends and go with working stuff. But that's difficult, because for every developer that will go "Java is fine", there will be a dozen, usually younger, developers who are hyped up to use whatever is cool at this point in time.

But this is where the senior developers and architects should come in; they need to make a firm stand. Make things like https://boringtechnology.club/ mandatory reading. Have people that want decision power write ADRs so they are made to explain the context, the problem, the possible solutions, the tradeoffs, and the cost of picking technology X to solve problem Y.

It's too easy to bung in new technology or change things around; the short term, initial cost is low and the long term cost is not visible, because it quickly becomes "this is how things are done around here". Make it more expensive to change things.

And make people responsible for these things. If an individual advocates for technology X, they have to train and hire for technology X as well, and be responsible for it long-term. Learn to recognize a "magpie developer", the type that will always go for the latest shiny; you can use those to introduce new technology, maybe, but keep them away from your core business because they will make a change that impacts your product and your hiring, without taking long term responsibility for it.

anyway, random thoughts.

I'm a proponent of the boring technology school of thought, but it's not a great look when the images on that site don't load. Apparently lazy loading can now be done with a single img attribute. And those flat color slides should be PNGs.

My advice is: work with more senior people. It seems to me that people with 10/15+ years of experience will judge this hype train more severely than younger ones.

The dangerous spot is engineers with 5-10 years of experience who have become good enough at writing huge piles of unnecessary code and making it work.

Highly depends on where you work. At my company we stick with "use boring stuff" and have a limited amount of "innovation spending". I look at the complexity of these other things mostly as computer science theory: other ways that languages and problem-solving could be done.

Are you kidding? It has always been this way in one form or another.

It’s a peculiar feature of human nature that we want to make things more complicated than they need to be. The more something relies upon a combination of our skills, and the more esoteric those skills, the more insulated that thing is from outside influence, ownership, and control.

My bet is the frustration you feel is less about complexity and more about your inability to effect change. You're just one of many competing solutions to the same set of problems, and people will think your ideas are just as complicated, because they're not their ideas. They understand their own ideas better than they understand yours. Vice versa.

And we all live under this umbrella, together. I think that’s why the biggest asset you have as an engineer is to influence people who make decisions. Unfortunately, the best way to influence them is to convince them you have important, complicated knowledge they don’t. Self reinforcing loop.

One hypothesis (but not a single answer) is that complexity creates jobs. Engineering something complex and clever creates job security and consulting hours. Fads, trends, and ideas come and go like the tide, and it makes the big wheel go around. Kubernetes, which 99% of developers have no use case for, is indeed "job security" for what could equally be achieved with Unix and a few shell scripts. (I'm being deliberately provocative here.)

This is an important point. Looking at open job reqs can give the impression that X is very successful, when the reality is that any organization using X has an explosion in how many developers they need, and also a high burnout rate in the ones they have. Plenty of other developers are quietly and more-or-less-happily working with non-X, and you never see their jobs on the job boards.

>There are overly complicated solutions to simple problems

You sure about that? Sometimes the seemingly simple problems are quite complicated. Partly because we are building software for a world that is fraught with (security) landmines.

But point taken, sometimes you can overcomplicate the architecture.

>Distributed systems? Kubernetes? Rust for CRUD apps? Blockchain, NoSQL, crypto, micro-frontends and the list goes on and on.

Each of those is a particular tool for a particular problem (though I'm not sure why Rust for CRUD apps is so terrible).

>moving away from Python (because it's too "slow");

Not only is it slow, but the lack of compiler support for typing leads to an inordinate number of (stupid) runtime problems. I say this because I recently inherited an entire inventory of Python software built up over the years at my current employer. Right now, I have a bug backlog full of runtime blow-ups (dating back years) because of careless typing. Coming from the unsexy world of C# and Java, I'm still trying to see why Python would ever be used for anything but scripting and (maybe) prototyping; it's slow as molasses, with no compiler support.
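
The kind of thing I mean, as a minimal sketch. A static checker like mypy would flag this, but nothing in the interpreter itself complains until the call actually executes:

    def total_bytes(sizes: list[int]) -> int:
        return sum(sizes)

    # CPython runs this happily and only blows up at runtime, deep inside
    # sum(), with "unsupported operand type(s) for +: 'int' and 'str'".
    # mypy reports it before the code ever ships:
    #   error: List item 2 has incompatible type "str"; expected "int"
    print(total_bytes([512, 1024, "2048"]))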

I believe this issue was discussed about 7 years ago, and I believe this article still holds true.

https://pingineering.tumblr.com/post/116038532184/learn-to-s...

(let me know if someone has a better link.)

I'm deeply saddened that they chose MySQL as their hill to die on. I understand why, but boring technology doesn't have to corrupt your data silently by default.

As for better links, I’m sure the concept of “choose boring technology” evangelises and explains the broader point that your article makes: https://boringtechnology.club/

Just say no. I know this isn't easy. I'm the tech lead at my company, and I've continuously steered us away from stuff because I couldn't understand why we needed to do things differently, and no one else could come up with a reasonable argument for why anything that we had that worked needed changing. I have dabbled with all the things but not found a compelling reason to change anything. Sometimes people go off on a tangent, and sometimes they discover something useful, but I'm in no rush to get there. What we have is fine and we have stuff to do.

I agree that we lean on things that are often too complex for our given tasks (the industry encourages it), but I'm going to push back with a little Rust rant. I'd argue that Rust actually represents a simplification, not a complication. Perhaps GC was an over-complication. Perhaps the app/system language separation was an over-complication. Perhaps the idea that memory/CPU limits don't matter, because hardware keeps improving and there is always more on the way, royally screwed us. Perhaps OOP is a disastrous over-complication. Perhaps energy costs and efficient resource management are actually a hell of a lot more important than we believed. Look at all the bloat we blindly accept in the name of productivity, which in many cases is dubious. Rust may not be the be-all end-all of languages, but it does shine a bright light on the brain rot that has consumed the software industry.

It's always been complex, and getting more complex; it is likely that you are just becoming aware of complexity you thought was simple. There's also the phenomenon of companies using tools/techniques that others are using without understanding why others are using them. I work for a company that needs k8s, Kafka, and distributed systems. The previous company I worked for did not need that at all, and so they didn't use it. The company before that didn't need them either, but thought they did, tried to move their single, relatively simple ETL pipeline to k8s, and it was a disaster.

But companies that need those tools really, really need them. We don't use "nosql" databases (I hate that term; whether you use SQL is completely orthogonal to the problem, so "non-relational" or "non-OLTP" is better) because we think they are cool tech; we use them because traditional, relational, OLTP databases don't work for our use cases. But if someone comes to me and asks what database they should use, I always say "postgres", unless they can present a compelling reason postgres won't work.

The problems we face are tremendously complex, though, and only getting more complex. We fight back with tools, but there are years where the tools are failing to keep up.

agreed on postgres being the default choice

postgres delivers 200% on top of what you need and if you still aren't satisfied, there are countless forks and extensions

Making technology decisions based on what others are doing is not engineering; it's lazy. Solutions should be analyzed and chosen based on the problem, not out of FOMO.
Excellent question. Esp. the final bit where you ask about 'the business value'. It is actually closer to just 'value' - as experienced by the users.

If you are in a position where you see piling up complexity does not bring in more satisfied users and more money that is a great time to set up a simpler competitor that will do things on the cheap in a less complex way.

I have a thought experiment: if these tools are indeed not adding business value, and organizations are becoming "unnecessarily complex", then it should be easy to undermine their position in the market with a product that chooses the tools that you deem to be simple, right?
Sounds like you're getting caught up in the hype. You don't have to use any of those tools or technologies to do your job, and indeed the vast majority of people working in software don't.

The only people who are screwed are the people who follow hype. The rest of us are just fine.

Most blithely, no, software "engineers" are making out like bandits.

Recently, one of Alan Kay's talking points has been that "software engineering is an oxymoron", and I couldn't agree more. What he means by this is that, instead of the principled approach to design and development characteristic of other engineering disciplines, software people do little more than what amounts to tinkering. Part of the blame lies in the shift to agile methodologies, adopted wholeheartedly with little understanding of what the old-style process was doing. Projects, moving incrementally, are stuck in local maxima in the name of "product-market fit".

That's the demand side of things; you've described the supply side pretty well. Developers like dealing with problems, so they naturally and unconsciously seek out more complexity. If you look at how even mediocre developers can make >200K easily now, it's not hard to see how that's a massive problem for everyone. All this complexity, especially from getting the various separately developed components to work together, gatekeeps the profession and business of making software. I'm at one of the companies that doesn't spend the most to hire, or have the shiniest perks, and let me tell you, we're desperate to get anyone we can get. This is unsustainable, and I worry we need to solve it before AI takes the means of programming out of our hands.

So, what is to be done? There are plenty of examples of software that gave the non-programming masses a means to build. Spreadsheets like Excel are by far the most popular, and have driven corporate computer adoption since VisiCalc came out in 1979. When they were simple, scripting languages like PHP and Perl could be handled by a non-engineer, as long as the admin side was handled. But I think the most interesting cases are those of full, contained programming and authoring environments, like Smalltalk and HyperCard. By being the entire system, they could cut out all the accidental complexity brought on by these interfacing components, and instead let users focus on building software. Importantly, they don't deal with the concept of files for code - instead it lives alongside whatever object it's relevant to. For better or for worse, object-oriented code is easier to reason about and empathize with. The more imperative code gets, the more the programmer is forced to play computer, which I think is the determining factor in gatekeeping programming today. The way forward is having the computer explain itself, be visible, and unsurprising, which modern stacks seem to be moving away from.

Software Engineers are generally intelligent people.

Intelligent animals need stimulation or they get bored and depressed.

I think collectively, "let's move to Rust" is at least partially because we're not challenged enough by writing the same CRUD app for the 20th time in the same language we've been using for the last 5-10 years, and we want to leave our mark in a new ecosystem by implementing whatever is missing.

Some people want to optimise for "fun/exciting/different" while others seem to be aiming for "known/just works, incidentally boring".

We probably need to find the right middle: how do we keep it fun and challenging while keeping it simple and maintainable?

> Will industry move towards simple solutions after experiencing this churn down the line or are we doomed forever?

That depends on the individual developer. For example, I'm working to clean up the mess that has become app dev w/ JavaScript (https://github.com/cheatcode/joystick), but I expect many will dismiss it short-term because it's not "what everybody else is doing" (despite being far simpler and clearer than the state-of-the-art).

And therein lies the problem: groupthink. There are very few people asking "how do we simplify this" or "how can we make this more clear" and a whole lot of people trying to impress each other with their galaxy brain knowledge of unnecessary tech.

The good news is that it's not a technical problem, but a cultural one. People are afraid to think independently and so they just go along with whatever the "best and brightest" say to do (which is usually an incentivized position due to existing relationships/opportunities).

You landed on the surprising root cause:

“Doesn’t add business value”

But do you know how the business makes money (the actual processes)? Can anyone tell you how to add value in concrete terms?

Because in over a decade of consulting on technical leadership, Agile, lean and DevOps, the most consistent issue I’ve seen is that those questions are unanswerable for almost anyone in almost any company.

In the absence of a clear path to value creation, everyone optimizes locally for “best practices” because…

the root problem is that almost all decisions have to be explained to people who know next to nothing about your area, and you still need to sound rational.

The local maximum for that usually is “this is how _____ does it & it’s the new trend now.”

I laugh when these "brave new age" practices find their way into academia, where they actively interfere with every step of producing the intentionally perpetually-unfinished, half-assed, single-use software that academics make.

But why are s/w developers worried? As long as tons of advertising money finds its way into glorified blogs, they will get paid, no matter how much complexity they invent to justify their workload.

Yes, but not in the way you describe. Software engineers have power right now; we should be unionizing (even if the union only pushes for things like IP clauses and non-competes to be less draconian). Build the union while we are strong so it's there when we are weaker.

A union doesn't have to be a huge monstrosity. It can be simple and fight for a few basic standards in the industry.

The purpose of a union is to create a labor cartel which tends to standardize the price of labor above the rate at which the market would likely set it. IP clauses and non-competes are a small part of the issues plaguing our industry. Putting constraints on the labor supply is probably not a good thing; it leads to the market for labor relocating to less union-friendly climes. Ask anyone from Detroit how well that worked out.
Software engineering doesn't rely on factories, and less labor friendly areas produce worse products (ask Boeing about that.)

Apple and Google aren't going to just shut down in California because a union asked them to give some concessions that materially improve engineers lives.

> The purpose of a union is to create a labor cartel

No, that is not the "purpose of a union"

Some unions, especially in the USA, have functioned in such a way, but this is far from universal.

'Labor market' is a purely American concept. You shouldn't be switching jobs constantly and relocating. It takes a toll on personal life and makes no sense anyway.
It's still a "market" so long as there are supply and demand, regardless of how frequently a given participant is conducting transactions in that market.

The housing market is probably a good example. Many participants probably only purchase property once or twice in their lives (even fewer if you consider the case of a married couple buying a house - that's 0.5 purchases per person).

Unless you live in a communist country, you are part of a labor market.
Among many offers, I chose a company on absolutes, not relatives. The culture is good, and I want to make serious products that serve a purpose. I get paid less than my peers but enough for a living. Is that communistic?
You are simply pricing in external variables to your compensation, which is part of how markets work.
I get what you're saying but I wouldn't program web apps for 3x the pay even. I don't "feel" the American way of switching jobs and being a general purpose programmer.
Too late for that now. With Remote Working becoming mainstream, you'll need to coordinate workers globally to create a union.
For me it's a miracle each time my PC boots. We often forget the sheer marvel of modern computers and need to appreciate what we have. Remember Steve Jobs on stage showcasing how you could send an email with an iPhone. Back then it was amazing, but now it's common and we're all jaded about it. We need to recapture the joy of computing, not build large overarching abstractions.
The iPhone is very pretty but the software was not technically impressive. I have much older handheld PCs that could send emails and can still do more than the current iPhone.
Can you do augmented reality on your handheld PC? How about edit HD video?
Yes, but it has very little to do with any of the things you mentioned. Software became so prevalent that "Software is eating the world" became a mantra, but people in IT have little to no choice in what's put on the plate. They've been convinced that positions of responsibility and authority are bad and should be left to the MBAs. They've eschewed the kind of industry groups that provide some semblance of protection and that nearly every other profession has adopted. Doctors, lawyers, actuaries, accountants - all have professional organizations that are powerful and provide some protections. I believe that most of the pathology you see is a result of a push and pull between management and IT, as people in IT seek those protections in other ways.

Just one example: Scala. First, I'm not criticizing the language itself. It has its place, but what I saw was programmers trying to create a protected space that would provide higher bill rates. Java was everywhere and hiring a Java developer was easy. Scala was new and had a steep enough learning curve that you could drastically shrink the candidate pool while at the same time selling the shiny new toy to management. They could create complex, arcane code that kept new developers from getting up to speed, while providing the excuse that those developers were inferior and weren't smart enough to keep up. It didn't work for very long, as management caught on that they weren't getting much other than higher labor costs. Go seems to be the latest incarnation of that, while Rust is a bridge too far to sell to management.

So it's this back and forth: provide something new to management that they can sell to their superiors. Management buys into it as long as they can get promoted before it inevitably blows up, and the developers who sold it move on to new projects. Rinse and repeat.

> They've been convinced that positions of responsibility and authority are bad and should be left to the MBA's.

Meh. From my experience, many developers actively don't want to go into management, because usually your whole day is filled with management crap and you can't go and actually code any more. And developers who do switch to management often end up as miserable bosses because their bosses don't care about "leadership trainings".

Additionally, many companies have the non-management track end at senior level, which means zero career progression for those who do not wish to transition to management.

>> your whole day is filled with management crap and you can't go and actually code any more.

These are clearly the wrong people to be pushing into management. Good management (it does exist) includes people who have coded, but are willing to give that up to enable others to do that. They get satisfaction from being enablers and making space for their underlings to be creative and make decisions.

Further, there are many excellent managers who don't have a typical developer background, but can recognize what success means for their team within an organization and how to achieve it. I've been managed by many excellent managers with backgrounds in chemical engineering and the classics.

The developers you describe should decline these positions and find a better fit where they can make better use of their time. Choosing to accept positions like this hurts them, as well as others.

I'm also jaded from all the new frameworks and "paradigms". (Similar to how every exploit must have a catchy name nowadays.) However, I genuinely love the innovation and ingenuity of software engineering. The industry will find simple solutions, but not in the way you think: the language and mental models will advance, making what now seems complex into a simple thing.

Phone calls are stupid complex nowadays compared to the old point-to-point wiring, but we can still very easily "pick up the phone and dial." It's an abstraction/mental model that's held since PBXs became automated.

When I studied machine learning 20 years ago, it was barely used, and everything was "from basics." The applied stuff was very simple, like an auto-encoder. Today, the way you think about, and teach, ML is not "a matrix here and a vector there," but in combinations of ANN layers.

I think there's a huge amount of 'boring' software development going on that doesn't touch any of this kind of stuff. Java and C++ still run a lot of things.

I'm still suspicious of Guava, let alone Rust.

I think the problem are not the fads and hype trains (those have always existed and will exist - remember "network computing", "netpc", "thin client", "i-anything", ".com", etc?).

The problem is:

a) Inexperienced developers that confuse jumping on hype with "modern" and sound engineering, especially when the project is not something to be deployed and forgotten about but something that will need to be maintained for a decade or more (will your Kubernetes or blockchain still be around in 10+ years?).

b) Clueless managers that allow it to happen (or, worse, actively push it)

c) Spineless hucksters that would sell you the Moon as long as they get their commission.

None of this is the fault of the technology or the engineers who created it.

Heck, I have recently witnessed a representative of a company manufacturing mining excavators (this type of equipment: https://daemar.com/wp-content/uploads/2018/12/dreamstime_m_8... - company is not Daemar, though) giving a breathless talk about how they "innovate in metaverse" by giving their customers the opportunity to buy NFTs of the pictures of their excavators. Seriously, not making that one up ...

That's just general lack of common sense, general lack of understanding of who your market is and what your customers are actually asking for (hint, NFT it probably isn't unless you are in the business of yet another crypto Ponzi scheme) combined with FOMO.

And the company management either gets it and tamps down on it, or the company goes out of business at some point.

This is not really about software - all of those things have their places and can have great benefits when used in the right way for the right purpose (not because it is trendy, modern or because the competition is doing it too) and by people who actually understand them (and the consequences of deploying them).

We lean into complexity because it's easier. Simple solutions are actually much more difficult to create.

Software engineers are lazy, don't want responsibility, just want to have fun and be creative. That's not a recipe for good engineering. The industry will continue to chase its tail as long as we don't treat it like a real engineering discipline.

> Someone else talks about that we need to [...] X [...] because that's where the leaders of the industry are moving to

This is the definition of cargo cult, and companies in our industry have a higher than average tendency to behave this way.

Most technology, principles, methodologies, programming patterns, project management patterns, etc., are subjective, as in they work well for certain projects... not all. Even the massive over-complexity we see, for example in containers, is sometimes worth it; they have their place. The issues come when people start copying what others are doing, as you have found, because they draw an overly simplistic connection between their chosen tools and their success as a business.

Either convince your peers, or even superiors that mimicry is a poor basis for technological choices (best argued by doing the analysis yourself and pointing out the real world applicability), or find a different company that understands this (they do exist).

The new data tools I've seen are complex under the hood, but offer elegant user experiences, giving the best of both worlds.

You referenced a 500 line Python script being refactored with Rust, and it made me think of the Polars project: https://github.com/pola-rs/polars

Polars uses Rust to make DataFrame operations lightning fast. But you don't need to use Rust to use Polars. Just use the Polars Python API and you have an elegant way to scale on a single machine and perform analyses way faster.
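
For a sense of what that looks like, here's a minimal sketch of the Polars Python API (the column names are invented); the heavy lifting happens in the Rust engine, but the caller never touches Rust:

    import polars as pl

    # Build a small frame and run a filtered projection; the execution
    # happens in Polars' Rust core, not in the Python interpreter.
    df = pl.DataFrame({
        "player": ["a", "b", "a", "c"],
        "score": [10, 30, 25, 40],
    })
    print(df.filter(pl.col("score") > 15).select(["player", "score"]))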

I'm working on Dask and our end goal is the same. We want to provide users with syntax they're familiar with to scale their analyses locally & to clusters in the cloud. We also want to provide flexibility so users can provide highly custom analyses. Highly custom analyses are complex by nature, so these aren't "easy codebases" by any means, but Dask Futures / Dask Delayed makes the distributed cluster multiprocessing part a lot easier.
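
As a rough sketch of the Dask Delayed style mentioned here (load/summarize are placeholder functions, not Dask APIs):

    from dask import delayed

    @delayed
    def load(part):
        # Stand-in for reading one partition of data.
        return list(range(part * 10))

    @delayed
    def summarize(chunk):
        return sum(chunk)

    # Nothing runs yet: these calls only build a task graph, which Dask
    # schedules across local cores or a cluster when .compute() is called.
    totals = [summarize(load(i)) for i in range(4)]
    print(delayed(sum)(totals).compute())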

Anyways, I've just seen the data industry moving towards better and better tools. Delta Lake abstracting away the complications of maintaining plain vanilla Parquet lakes is another example of the amazing tooling. Now the analyses and models... those seem to be getting more complicated.

I think the problem is that the majority of architects copy/paste solutions to big companies' problems (Google, Twitter, ...), but 99.99% of businesses don't have those problems. And when you tell a manager that something is used at Google and will cost the company $0 (open source), he will say, "OK, do it."
The fact that a FAANG spent hundreds of man-years on their bespoke solution already implies that theirs is not a solution for small problems. Small problem solvers should not look at FAANG for solutions, but at other small businesses.

More often than not, small problems require small solutions...

We went from SOAP to NSwag-generated OpenAPI clients. Full circle.
Eventually all things will be abandoned and/or rewritten. It will all be thrown away and then there will be more work ahead ($$$). I cringe thinking about the wasted life, though.
The question to ask is: has software gotten better? I say yes - cloud, mobile and web have exploded and are of much higher quality than they were in the past.

So I wouldn't say we've screwed ourselves.

Would we be better off if we took a different path? No one can or will ever know.

The complexity caused the variety, not the other way around. Networked systems are inherently complex. Most of the technologies you mention are attempts to solve that complexity in some way, and the ones that stuck ended up being ideal for specific use-cases but not others.

The industry trends towards the most useful solution, not the simplest one. React isn’t internally simple, but it killed the frontend JS framework experiments which used to come out daily because it really established a useful paradigm that covers a lot of the web GUI usecases.

The process is messy but it's not illogical.

You can go a very long way with terraform, html, JavaScript, and golang/Java/python/rust/whatever API language you prefer.

If these things aren't at 100% you're just adding to problems with more things, not solving them.

This question is getting a lot of attention. As such, it is consuming our time. I’d suggest it can be improved and revised. Currently, it is rather vague. I suspect the author could take more time and make it clearer. There are a lot of interesting themes — it deserves to be unpacked and clarified.
True that, but perhaps asking the question and getting these answers is part of how that happens, and the end result is a longer-form blog post somewhere.
A lot of things are over-engineered, but this has always been true.

It is not necessarily fair to say that the majority of software engineering jobs actually require or involve the en vogue tools.

1. Just because tech stacks gain traction in headlines does not mean that they are truly mainstream, but rather that they are of significant interest to the community where the links are submitted/discussed.

2. Recruiters and job ads are written to target software engineers and are gamed towards this goal, dropping buzzwords left right and center, sometimes quite nonsensically. Front-end jobs quite frequently demand that you have experience with Angular, React and jQuery to work on something that turns out to be a Vue.js app, and so on. So this can also make certain tech stacks and frameworks appear more prevalent than in fact they are.

So, yes, there are lots of overly complicated tech stacks out there, but no I don't think anyone is screwed. Often those tech stacks will have been chosen to solve a specific business problem and then it's not overly complicated, it's appropriately complicated.

If anything, there's just more noise to filter out when selecting a place to work. Lots of buzzwords and nonsensical jargon dropping, or indeed questionable decisions for the solution of a relatively simple business problem, are good indicators of places at which you probably shouldn't work.

I agree with the general sentiment, but I would add that Go is a very simple language. It's probably the simplest language I have ever used (besides C).
what you’re referring to is called cargo-culting

small companies copy their technical and even hiring decisions from behemoths like Google

why? market powers!

the unfortunate reality is that these companies can't compete elsewhere, so they use hype technology that allows them to better market themselves (at the said conferences, for example)

the employees can also use this opportunity to put “managed Kubernetes cluster” on their resumes to get more job offers

solution for you would be to find a company that doesn’t focus on technology, but on the problem itself

Before you categorise something as 'unnecessary complexity', maybe it is worth taking some time to understand whether or not the problem that your company is trying to solve aligns with the goals of the idea being presented.

We probably are doomed if there is no push back and debate with the vocal minority. Silence is often mistaken for complicity.

I think the market is always right. As complexity increases in areas as you describe, it will create opportunities for solutions that simplify things.

IMO, it's one of the reasons that Phoenix LiveView is so appealing for people because it removes so much complexity from building otherwise complex tooling.

I actually just had to come face to face with this because I've been developing a lesson plan to teach my son to program...and after looking at everything I settled on Linux command line + HTML/CSS + SQL. Then the decision came down to which language to teach and I narrowed the field to Ruby, PHP and Elixir.

Ended up settling on Elixir simply because of the functional style and total capabilities without having to introduce tons of additional technologies.

When my last company grafted yet another version of React onto our aging Rails app (making it the THIRD such front-end framework present in that codebase) to make a SPA loan application form, where the only externally fancy thing it does is real-time validation, I knew the realm of ridiculousness was long past us -- instead, we were deep behind enemy lines in the zone of absurdity.
I personally feel like it was somehow worse in the days of enterprise Java BS. Check out The Daily WTF and it doesn't really seem to have substantially changed over the decades: some folks in the industry will have great success with some technique that happens to work for their particular case, others will repeat it, some successfully, some not, a myth grows from the successes that people do hear about, there's intense FOMO (what is _your_ microservice strategy?), and at the end of it there's "cargo cult technical strategy" from people with little understanding of the circumstances in which something is applicable, who try to get to success by applying successful people's techniques regardless of circumstances.
The problem isn't with the tools you list. In my experience, management is looking for a silver bullet to solve all their problems or to use something as a marketing term. It seems many non-tech companies are chasing the tech that actual tech companies use even if their use cases don't justify it.
The reality is all the tools you mentioned solve certain problems really well.

Need container orchestration? K8s is the best on the planet far and away

Need accelerated compute? Rust is a fantastic language that saves us from C++.

These tools are all fantastic and we should be very grateful we have them. If people are using them outside their use cases then that’s just bad engineering.

This is why I moved to data science, so I can focus more on solving problems than picking frameworks and libraries. We are not completely immune to this problem, but by and large the tooling ecosystem is much smaller and the focus is on problem solving and not the tech stack.
Definitely not immune but far from perfect. I was talking to folks at PyCon about this problem. There’s definitely “framework fatigue” in data engineering.

Luigi, Airflow, Argo, Prefect, Dagster, bash + cron, MLFlow. Pandas, Dask, Spark, Fugue, etc.

No. It's fine. Idk how long you've been in the industry but the Bad Idea Graveyard is already a mile high. It's great to see innovation and competition and it necessarily will include some duds or some things that inexplicably succeed. I've seen a lot of orgs experiment and then back off a lot of these kinds of things. Sometimes they end up getting strong adoption. There's plenty of smart ways to manage it. The industry keeps growing and evolving like crazy and the overall trajectory has been nothing but positive.
Just because Ruby and Python are great doesn't mean that they ought to be used for every project. I love being able to write code that compiles to a single statically-linked executable in Rust that pushed me to write more correct code from the start. I also appreciate what Erlang/Elixir offer in terms of fault tolerance, extensive pattern matching and functional programming. There are so many ways to solve problems and that's a good thing. Every one of these languages has tradeoffs, though. People don't move to fancy stacks just for the sake of moving. They're trying to solve old problems by creating new ones!
A question for the people here who have a favorable view of kubernetes: what is your level of experience with it, and what problems does it solve that aren't already solved by cloud managed services?
So, you have valid points, but...

1) some of these things (e.g. node, microservices) already peaked a few years back after being overapplied, and now the pendulum is swinging the other way

2) others (e.g. Kubernetes, React, monorepo) were developed at large, profitable companies that others wish to emulate (or work at someday), so they find excuses to use them. This case takes longer to reach a point where things swing against it, because everyone wants to pretend their company is the size of FAANG or will be soon, but the same process of overapplication and backlash happens eventually

3) in the midst of all that noise, there are some new things which are in fact a good idea for most developers. I don't know Rust or Go, but perhaps they are examples of that.

The key for us as developers is, unless we wish to work at FAANG, try to spot (3) in the forest of (1) and (2), and don't let (justified) annoyance at (1) and (2) blind us to the fact that (3) is out there as well.

cars today are more complex than the model T, though we could have settled for faster horses ;)

I don't think we're screwed.

I agree that at times complex solutions are prioritized for the wrong reasons - e.g. to create more work, to generate buzzwords, or to look nice for hiring and investors. But ultimately these are tools with tradeoffs.

I happen to like K8s, monorepo, and Go because they solve problems that I have personally run into. I think crypto goes too far and doesn't really solve anything.

In terms of complexity, I don't see these tools as going from algebra to calculus, but more like re-learning variations of algebra over and over - sure it's tedious, but it's not rocket science.

However if you don't like dumb industry trends that don't create business value you can always go work for a series A startup. They DGAF about the frills or buzzwords, they just want fast results.

I believe there's some level of real innovation in all the new trends, but I would say the majority is just recycling of old ideas implemented at a different layer (e.g. CommonLisp/Smalltalk vs. Java vs. Javascript, jails vs. docker, Thin clients vs. SPAs, ...) and hyped by some BigTech™.

The industry seems to be constantly spinning its tires, putting a lot of effort into rediscovering mostly the same things every decade, while really hard problems remain unaddressed. That's clear when you see that most important algorithms were published before the '80s.

I was thinking about this yesterday as it relates to infrastructure and hosting systems; then I stumbled, via a semi-related article, on the phrase "cloud repatriation".

https://deft.com/blog/cloud-repatriation-isnt-a-retreat-but-...

I think the tools may have become too good.

There are so many different ways to build web services and the hardware (CPU/GPU/RAM/network bandwidth) and the software (OS/Nginx/Python/PHP etc.) have become so good that at the end of the day, they all work, more or less, which means that such complexity can always be justified.

I feel like software written for embedded systems to work with physical world suffers less of these issues because the environment is just less forgiving.

Explicit tools are complex on the face - implicit tools are complex in implementation.

Kubernetes is complex, and FTPing some .rb files would be simpler: until one of about 145 different situations arises that Kubernetes forced you to account for ahead of time.

Whenever you find yourself complaining about the complexity of a tool: ask yourself “am I smarter than everyone in my industry, or do I possibly not understand the problem entirely?”

Just spin up a Rails app, pop on a Postgres DB, tailwind CSS, job done.

(Only half joking.)

Yes luckily FAANGchads like myself are helping the industry by constantly job hopping to max TC.
I think you're overreacting (and I think the comments here are overly negative).

Web tooling is better than ever. I can very quickly spin up a full-fledged production grade app with very little investment. I don't worry about blockchain or NoSQL or any of that. I just use tools that make me a productive engineer and that's ultimately what companies are interested in. If you're worried about recruiters asking you if you've looked at modern languages, then you've got some bigger fish to fry. If you don't know the language, the answer you should feel like giving is "I can learn anything, and I'd be happy to prep for the job."

I'm currently working on a statistics website for a game named Smite. The ingestion engine is powered by Go/Redis/PSQL/Docker, and the frontend is Next.js deployed on Render.

This is hardly complex. The Go binary reaches out to the Hirez API service, requests some data, caches it on Redis (in case we need to run the ingestion multiple times during development and to avoid service quotas), and then stores the data in a normalized data structure in Postgres. With Postgres I can now run SQL queries on top to gather stats about the playerbase and the games. All of this is done on my local machine. My MacBook has about 1 TB of hard disk space, which used to be unheard of a couple of years ago, so I have no worries about my database growing to a size I can't manage (old matches are also pruned and removed).
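
That caching step is just the classic cache-aside pattern; roughly, sketched in Python rather than Go for brevity (names invented):

    import json
    import redis  # the redis-py client

    r = redis.Redis()

    def fetch_match(match_id, fetch_from_api, ttl=86400):
        # Cache-aside: serve from Redis if present; otherwise hit the
        # upstream API once and cache the response to spare the quota.
        key = f"match:{match_id}"
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)
        data = fetch_from_api(match_id)
        r.setex(key, ttl, json.dumps(data))
        return data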

The next part is the frontend part, which is what I'm working on now. But this is also super simple. I'm using Next.js to statically render a website using SSG. I basically reach out to the Postgres database locally, grab the data points I need, render the UI into static HTML files, and then I just take that build, push it to Git and it triggers a job to deploy it on Render. All of this tooling is ridiculously and refreshingly simple.

I think you're really overthinking it.

Even as a British person, I am not sure if this is sarcasm and a wonderful example of a satirical take or not.
Cloud pay-as-you-go may help balance this out. With an in-house cluster, the capital cost is paid, sunk. With a cloud deployment, there is a tangible impact of wasteful code right there on the balance sheet every month.
Yes, these things are done for IMPACT, personal impact to get that raise/promotion. Most of it totally unnecessary for the 99.999% cases. Same with leetcode questions. You start with leetcode at the gate and then continue with micro-fe refactor to get a raise.

Industry is infested with people who hate programming but love status.

Software ate the world, but didn't digest it quite properly, and now the world is in many ways broken.

I think some of the complexity stems from trying to make digital things that simply aren't or shouldn't be.

> I cannot help but wonder, that we have possibly screwed ourselves pretty bad, and there is no escape from it.

Just focus on making something great, and don't get too caught up in all the fashion. Software lasts way longer than people think. No one cares what brand and type of hammer a builder uses to make an amazing atrium. Likewise, no user ever thought, "this video editor would be better if it were written in Rust and ran on Kubernetes."

I think the author is suffering from a common problem these days: "I see one thing is broken, therefore it's all broken." Instead of taking this approach, think about how you can improve things rather than cataloguing what is wrong - see what can be fixed.
There's a big leap from "a few people at work are overengineering things" to "we have screwed ourselves as software engineers". I don't think that you can generalize to the entire industry based on your experience.
In general, people only move their career forward by pushing for new things, which inevitably become increasingly complex over time as all the low-hanging fruit is gone.
Just let them create these complex monstrosities, eventually it will open up opportunity for simpler tools and systems that will eat them for breakfast.
I don’t think you’re wrong in your observation (though perhaps a bit hyperbolic in the doom) but I’m perplexed why you think there’s “no way out”.

Software is malleable, people are generally smart. It may take longer than you hope it does but things will shake out just fine as teams/companies are forced to look critically at their infra spend vs utilization and adjust accordingly.

All this thrashing & change is normal. There are many reasons, but here are a few:

- Trend-following (Mgmt. FOMO) is real

- Resume-driven development is real

- Sometimes the 'new' stuff is better

Agree with everything except the monorepo comment. The polyrepos I've experienced were more complicated than the monorepos.
I think about it more from the perspective of "building stuff that is useful and interesting". I can very quickly build a lot of cool, useful stuff with JS + Node + React + Postgres.

Yeah there is a lot of overbuilding and BS in our industry, but I don't think we're unique in that regard. It is safe to block out the noise and focus on what excites you.

Not to mention shoehorning MongoDB into everything because "MySQL is slow".

This is at a startup that doesn't even have 100 concurrent users, and their data and queries are nothing special.

First software ate the world. Now software is eating software too. My point is, what you write makes sense if the target isn't moving. Writing software will inevitably become more complex because the envelope is always being pushed.

What you say about needless complexity is a very valid point, but it's just growing pains imo.

I wonder if "growing pains" fully captures what's going on here. It might just be natural for people to grab more layers and tools when they run into problems. Everyone loves to demo how the new thing works with "just a simple YAML file."

Honestly, everyone would be better off doing everything in code (python, bash, go, rust, c - it doesn't matter) directly. Those are easy to debug, flexible, and everyone already knows how to work with them.
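
As a small illustration (the image name and commands are made up), a deploy step that might otherwise grow into a page of YAML can stay a few lines of ordinary, debuggable code:

    import subprocess

    def deploy(image_tag: str) -> None:
        # A hypothetical deploy step as plain, steppable code: trivial to
        # run under a debugger or to wrap with logging and retries.
        subprocess.run(["docker", "build", "-t", image_tag, "."], check=True)
        subprocess.run(["docker", "push", image_tag], check=True)

    deploy("registry.example.com/app:latest")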

I suppose grabbing more layers and tools instead of thinking deeply about why you're having the problem and resolving it at its core is the growing pain I'm referring to.

On the other hand, perhaps the industry is being ever-increasingly led by a younger workforce who already came into this ecosystem, and there's less chance of a full retro/introspective about why things are the way they are.

Lordy, yes. It's human nature:

https://xkcd.com/2347/

You are awesome and cool and raking in the kudos and bucks if you are piling yet more stuff (especially big, complex, and unstable stuff) on top.

You are a stupid nobody loser if you are the dutiful maintainer in Nebraska.

If the solutions are indeed overly complicated, then eventually natural selection will weed them out, assuming that eventually the supply of money funneled into these things becomes limited. There's a precedent for this in the way that mid-2000s startups eschewed the heavyweight J2EE and/or "object request broker" architectures for simpler HTTP calls.

But whether or not they're overly complicated, I think the reason why these things are grating is because they're less fun than coding up solutions to problems 15-20 years ago. Configuring containers is a pure exercise in versioning hell, and with the emergence of devops, it's impossible for developers to avoid.

I'm over here jobbing from home on my couch (#winning) thinking about how screwed it feels that my task for the past couple days has been to send transaction details to a "webhook" so that a template email can be sent to the customer for 3rd party compliance purposes.

Why the heck can't we trigger an email from our internals? Oh, we don't even host our own email... because we're using a different company to host ALL our emails, documents, filestorage, etc...

i'm_in_danger.gif

Wait until it breaks one day and no one can understand it enough to fix it !!!
My thoughts: quit working at tech companies. There are tons of small and medium-sized companies with businesses outside of the tech industry that need software development done. Many of them will choose to have it outsourced, but many of them don't. Of the last 10 years of my 20-year career, only 1.5 of those years were at a tech company (3 years ago), and they were definitely the worst.

I work at a foreign language instruction firm. I'm making a virtual reality training environment for them. It's the best job I've ever had. I don't have anyone micromanaging my work, because nobody understands my work. I barely understand their work, and that's ok. We understand that about each other and we actually collaborate.

In the last 3 years I've not once been yelled at, talked down to, berated, cajoled, pressured into working overtime, any of it. I've not seen it happen to anyone else, either. I have an office of my own. I can work from home whenever I want. People just trust me to be an adult and do my work and it's the greatest thing ever: basic human decency.

Also, folks on the business side feel like they're missing out on something if your solution is simple. When they meet their compatriots for dinner, they come back and ask me, "We don't use AWS?". They don't care that the bill is 1/10th and probably think you're inept for proposing a simple RDBMS-based solution instead of some monstrosity using NoSQL and micro-services :-)
> Will industry move towards simple solutions after experiencing this churn down the line or are we doomed forever?

IIRC I found this site because of this essay by Paul Graham:

http://www.paulgraham.com/icad.html

TL;DR: don't worry about industry, what's the most efficient way of doing things? Do that.

It depends on what side of the coin you're on. If one is adept at learning these added layers of complexity and has the motivation to do so, then almost any added complexity works in one's favor in terms of job security. Yet if one lacks the motivation, there's a chance you'll either lose out entirely on software engineering or get burned out eventually. More and more people are ending up in the latter camp.

This principle of having an implicit fluid-intelligence test as part of the barrier to entry might not be such a bad thing if there weren't a perverse incentive to institutionalize the complexity. The incentive is perverse because there are effectively few meaningful standards that the industry holds itself to, yet every incentive to be a niche expert - not only because that's one of the few ways to have an edge when there's no other barrier to entry, but because of an overall lack of competence by management in knowing when technologies should and should not be used. I anticipate the usual backlash to my suggestion that the software industry has atrociously low standards for itself, yet I will suggest again that we are a ticking time bomb of security exploits waiting to go off, and someday we will be openly blamed.

We can't see the forest for the trees. We've invited the entire world to become software "engineers" and computer "scientists" and have gradually lowered our standards at the same time. How does an individual fight back against the hordes of software cargo-cultists? One strategy is to learn whatever confusing crap that The Google is using and convince management to allow them to implement it, hence making one's self the senior-most expert in X at the company and putting competing colleagues in their place. Does it matter if Kubernetes actually solves a real problem for a company? Of course not. It just needs hype, an esoteric name, and backing from The Google or some facet of Silicon Valley.

This is why the web is littered with crappy f------ web "apps" that don't render any content unless code is executed using a Turing complete language runtime in the browser.

This is why a bunch of "orchestration" commonly makes deploying the simplest of changes to be a headache requiring coordination between multiple team members.

This is why microservices are almost always implemented in situations where they are in no way called for.

This is why millions, if not billions of dollars are expended on applying blockchain to problems that don't actually call for it.

This is why Ruby and Python are "too slow" to run the Eldritch abominations we call "monoliths" that cost unwitting companies hundreds of thousands of dollars to keep from failing.

This is why so many software projects include a bunch of dependencies that aren't even used, or too many dependencies, or dependencies that shouldn't even be dependencies.

This is why everything seems to need preprocessing, postprocessing, precompiling, compiling, and transpiling.

In the near term, a substantial number of us will win by adapting to the perverse incentive of needless complexity. Long term, we'll all get screwed over when those with the actual money and power wise up to the distributed low-level scam we're pulling. All it will take is some cyber-attacks to highlight how careless we can be.

Some are going to see this as an attack on their tool of choice.

There's nothing wrong with Kubernetes. There's nothing wrong with Rust. There's nothing wrong with blockchain, or even microservices.

It's about the use of these tools. Goodness me.

We'd better start emphasizing and teaching practicality before it's too late.

I think things are more complex - overly complex. This is partly driven by rates and CV embellishing - it's great for the individual to implement some new technology, both for their CV and for the better money. The complexity begets more complexity, and the cycle goes on...

I think the industry is waiting for AI to come through. They want the business analysts to be able to write their specs in English, and have the AI do the coding. In such a scenario lots of developers will lose out - some will still be needed - but from a business perspective, this will be even better than outsourcing.

Another manifestation of problems in higher education. It’s not just software.

If you read up on the sociology of professional specialization you'll learn that most technical complexity in a field is there for competitive purposes. Jargon exists more to exclude and obscure than to facilitate.

So one predicts less productivity as increased competition leads to the complexification of professions. This is all because higher education is broken. One of the functions of higher education, perhaps its most important function, is allocating human capital efficiently. It's fully derelict in this, preferring instead to sell credentials to labor that labor doesn't need, at the expense of the debt holders and students, to the delight of corporations. The result is zero productivity growth going back to the early '70s.

The pendulum always swings back the other way.

Perhaps after a downturn, things will revert to the mean.

To what degree is this valid logic?

Under what contexts does “reversion to the mean” apply?

Over what time frame?

The “mean” implies one dimension. What quantity are you referring to?

https://www.econlib.org/is-there-a-swing-of-the-pendulum/

> Is There a Swing of the Pendulum?

> By Pierre Lemieux

> People are often tempted to see social (including economic and political) phenomena in terms of a “swing of the pendulum.” In this perspective, problems such as wokism (just to give an example) will be corrected when the pendulum swings back. I suggest that this approach is easily misleading and seldom useful.

From Wolfram MathWorld:

> Reversion to the mean, also called regression to the mean, is the statistical phenomenon stating that the greater the deviation of a random variate from its mean, the greater the probability that the next measured variate will deviate less far. In other words, an extreme event is likely to be followed by a less extreme event.

It is a stretch (and invalid, generally) to extrapolate this well-defined phenomenon to a complex system. There are many complex systems that maintain or increase their complexity.

> There are many complex systems that maintain or increase their complexity.

Increased complexity is only achieved with additional energy input. I'm suggesting that if the energy (read: money) going into the system decreases rather than increases, there will be a reduction in complexity.

Why would there be no escape from it? Just do something else if you don't like the current state of affairs. No need to get all pearl-clutchy about it.
Complexity is bad when you are designing and maintaining a system. The ecosystem of humanity's software development isn't something you are designing or maintaining, here complexity is good because it provides abundance and diversity of tools and solutions. Don't need it? Don't use it. But stop advocating either as the right or good way for everyone.
This is just the Blub Paradox in a different form.
The reason for this is Resume Driven Development and promos for flashy projects. Management in tech companies is completely broken.
While a lot of this is valid, it isn't just a result of managers being enamored with conventions (though that is part of it).

In the cloud, if you want to run a performant platform, you can typically also run it much cheaper by migrating away from maintaining actual systems.

The problem is that DevOps and system-engineering jobs have become much, much more complex in order to accommodate the cloud, and as a side effect developers now have to meet them halfway as the line between the two blurs.

If you want to run a product that processes a million records a minute, you are likely going to want to go serverless, and that means writing atomic lambda operations. We are shortly not going to live in a world where you can just do all this on your laptop, which will be good in some ways and bad in others.
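
An "atomic lambda operation" in this sense is usually just a tiny single-purpose handler; a minimal AWS Lambda sketch in Python (the event shape and processing step are placeholders):

    import json

    def handler(event, context):
        # One record in, one atomic unit of work out; anything beyond
        # this gets split into its own function.
        record = json.loads(event["body"])
        return {
            "statusCode": 200,
            "body": json.dumps({"id": record["id"], "status": "done"}),
        }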

You will never have to worry about environments anymore; you just write code against the AWS, Google, or Azure SDK and it will run on an obfuscated, identical system you are never aware of... which also has its pros and cons.

You are right for most companies. Normal SaaS products need to get over themselves and realize Kubernetes might not be that useful, but this complexity exists because the larger companies were having trouble maintaining the old way of doing things at the scale the world demands. As long as millions of new users adopt the internet every year, this complexity is only going to get worse. The world of 2030 doesn't exist without Kubernetes and Rust and Lambda, imo... for better or worse it's going to keep getting complicated.

I don't think we're screwed.

I think there's a lack of theory for software complexity. Complexity is a loaded word with several definitions, so when I use it here I mean software complexity in the sense that we don't have a theory to explain how things should be modularized and grouped. And it goes beyond just grouping and modularizing: how you group and modularize protects your code from future technical debt, but each modularization may also come with an associated performance cost. There just isn't a theory that unifies all of these things. There isn't even a theory that explains just the modularization part without the performance cost.

When we don't have a theory for something, we have a word for how we operate in that realm: "design." This sort of stuff still exists in the realm of design. Anything that lives in the realm of design is very hard to fully optimize. Industries that incorporate the word "design" tend to move in trends, which can ultimately be repeating circles, because each change or paradigm shift is a huge unknown. Was this "design" more optimal than the last "design"? Is modern art better than classic art? Who knows? In fact the needle can often move backwards. The actual answer may be "yes", the current design is worse than the last design, but without a quantitative theory giving us a definitive answer we don't fully know, and people argue about it and disagree all the time. There are art critics but there are no math critics.

Take for example the shortest distance between two points. This is a well defined problem and mathematically it's just a line. You don't "design" the shortest distance between two points. You calculate it. This is what's missing from software. Architecture and program organization needs to be calculated not designed. Once we achieve this, the terms "over engineering" and "design" will no longer be part of the field.

If you squint you can sort of see a theory behind software architecture in functional programming. It's sort of there, but FP doesn't incorporate performance costs into the equation. Even without the performance metric, it's very incomplete, there is still no way to say one software architecture is definitively better than another. There may never be a way. Software may be doomed for a sort of genetic drift where it just constantly changes with no point.

The complexity of software will, however be always bounded by natural selection. If it becomes too complex such that it's unmaintainable, people will abandon the technology and it will be culled from the herd of other technologies. So in terms of being "screwed" I think it's fine. But within the bounds of natural selection, there will always be genetic drift where we endlessly change between technologies and popular design paradigms.

https://www4.di.uminho.pt/~jno/ps/pdbc.pdf

Nim is easy to use but performant.

I'm releasing a (web) development platform for it this month. Just getting the code ready.

I don't believe the issue being addressed is a lack of web development platforms here.
The author mentioned CRUD with Rust, which sounds like they think Rust is overkill for CRUD.
Sounds like the place where you work is going bad. Companies have this lifecycle where they start off small and scrappy with a good product; then, if they succeed, they become big and bloated and bureaucratic, where the product doesn't matter any more.

I'd suggest go work somewhere else.

I would argue with the notion that this complexity is unnecessary.

But most importantly: if it really is, as you say, you can profit from that. Open a consultancy and solve clients' problems without using these over-engineered solutions. If your competition truly wastes a lot of time, then you will be able to solve the same problems faster and in a more effective manner.

That is a nice theory, but then you need to do sales, and if the people in charge are swayed by buzzwords like microservices and monorepos, and your marketing language only goes as far as, say, "reliable, proven technologies", you'll be passed by.
If we didn't artificially complicate everything, people would realize that most programming isn't fundamentally very complicated and isn't worth a six-figure salary.
There is a pattern that has been bothering me that I think feeds in to this but I haven't had the time to fully flesh out an essay on it yet.

We as a profession spend a lot of time _solving the same problems_. This isn't necessarily a bad thing, different implementations allow for specialization in unique but important ways. Where I think we've gone wrong is that we can no longer generically re-use a lot of code between code bases because a lot of those libraries are written in dead-end languages.

What I'm referring to as dead-end languages are any programming language where you can't use library code independently outside of its ecosystem. Golang, Erlang, Javascript, Python, Ruby, the entire Java land of languages are all one big ball of intertwined dead end ecosystems, even Rust to a lesser extent. Any library written in one of those languages is locked in to that ecosystem and will never have a chance at becoming a generic foundational building block for systems outside their ecosystem.

One of the reasons we're even able to rapidly build so many complex systems is foundational libraries like libcurl that have "solved" a problem well enough, and are reliable enough, that using them is effectively an easy default decision. These are libraries that have more or less solved some hard problems sufficiently well that other engineers can model them away mentally without knowing the implementation or protocol details.

I've seen others compare these modern methods and tools to old-school in-house one-off development and how difficult that made things. This is the same effect, but rather than lock-in at a company level, it's lock-in at a language or library level (don't get me wrong, this is generally better than random in-house one-offs). If you're familiar with the Golang net/http package, that mental model can't be transferred to another language, and there is no way to expose that functionality to a language other than Golang, due to how the language itself is designed.

As frustrating, old, decrepit, and unsuitable for a lot of things as the C ABI is, any language that can produce a library exposing its functionality through the C ABI meets the table stakes for avoiding the sprawling landscape of language lock-in. Even in languages that support exporting libraries through the C ABI, there is always that concept of 'other' that seems so problematic to me. It's not _bad_ writing a library in Rust, but the boundary between Rust-land and the other is uniquely un-interoperable, or overly repetitive in its own ways, requiring layers of abstraction and special behavior to work.

For example, if you have two separate system libraries written in Rust that do all their work behind the scenes using tokio, are they actually going to share that runtime? No. There is no common libtokio.so file on the system, no cooperation or resource management between the two, and no common library to update if a security vulnerability gets detected (for the pedantic: I'm referring to pre-compiled distributed libraries as a system building block, not the common source you can compile on your own). Bundling specific versions into the compiled artifacts makes inconsistencies between the systems running the code matter less, but you end up with log4j-like situations where you're entirely dependent on the packager, maintainer, or vendor to handle your security updates, and you have to trust that they got it right.
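
To make the C ABI point concrete, here is a minimal sketch (a hypothetical function of my own, not from any real library) of Rust exporting a symbol over the C ABI, so that Python, Go, or anything else that can load a shared library can call it:

    // Built with crate-type = ["cdylib"] in Cargo.toml, this produces
    // a .so/.dylib/.dll that any C-ABI consumer can load.
    //
    // #[no_mangle] keeps the symbol name stable and extern "C" fixes
    // the calling convention. Only C-compatible types cross this
    // boundary: no Result, no String, and certainly no tokio runtime.
    #[no_mangle]
    pub extern "C" fn add_checked(a: i32, b: i32) -> i32 {
        a.saturating_add(b)
    }

Everything richer than that (strings, errors, async) is exactly where the layers of abstraction and special behavior described above begin.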

I think one of the big reasons we're experiencing this spiral of complexification is that we're not generating those foundational building blocks any more. There is no refinement tuning out the complexity of the system and distilling best practices into library defaults. There are no common underpinnings being generated that can be maintained, understood, and diagnosed system-wide. We can't reason about this utterly shattered set of walled ecosystems.

You're not wrong. The problem is that the fancy stacks actually solve some things, but they bring their own disadvantages with them. No one is denying that Kubernetes is extremely powerful and that you might need it in some use cases, but then you're suddenly writing operators for it instead of business logic for your customers.

I think the problem is that we build tools to solve specific problems, and then expand each of those tools until they become massive and need other tools to help them. So Docker solved a problem, but then it created problems that you need Kubernetes for, and so on.

One of the reasons I'm working on darklang is that I think the root cause of this complexity is solvable. The solution, in my opinion, is to build tools that cover multiple layers of the stack - that removes join points where you might be tempted to customize.

For example, firebase covers multiple layers where you might otherwise need a DB, a connection pooler, a firewall, an API server, an autoscaler for the API, a load balancer, etc. Instead, the only surface area you have is the firebase API. There are lots of similar tools that cover multiple layers of the stack like this; netlify, glitch, darklang, and prisma are some examples.

You're mixing up a lot of things. Overall, things are often born out of a need. The problems start when it's not a business need but a career-advancement (or other political) need. Think about React. We'd probably be better off as an industry if it weren't so popular. I hope someone at FB got their promotion for building that massively complicated framework - and I hope they learned what KISS means after they read the codebase of Preact, which achieved the same API with a fraction of the code.

Using Go or Rust instead of Python is not inherently more complicated; it's just a different language.

NoSQL is not complicated but it's fairly useless for most of its users (despite being so popular). At the same time, it has its uses for companies that need massive scale (think Google, not your average startup).

Kubernetes is fairly complicated, but it can be the easiest option (even if not the most resource-efficient) for getting something done, because of the ready-made tools available for it.

Don't worry anyway: we haven't screwed ourselves, we've just created tons of artificial work we can spend our employers' money on and use to inflate our CVs, possibly landing some more money in the next role.

When you build your own company, be conscious of this, and just use jQuery and PHP like Pieter Levels does.

The internet blew the doors off conventional business. Instead of a local community, you're now exposed to 5 of the 8 billion people in the world. Most of the increase in complexity is coming from corporations, which tend to run with a lot of inefficiency. New technologies are going to be introduced at an ever more rapid pace, and fragmentation will only increase. I wouldn't say we're doomed; quality of life around the world is rapidly improving. But it's a lot to stay on top of, and overwhelming to be sure.
We're not programming like Turing Award winners. We're treating software not as the science of making a product but as the mechanics of putting it together and patching it.

We should be less original and try to copy mathematics - their theorems are valid for thousands of years. Our codebases last maybe a decade.

> Rust for CRUD apps?

Are all CRUD APIs insensitive to performance? Correctness?

No, but Rust is a low-productivity, high-formality language; one needs to weigh whether that is really so important, given that most CRUD services are just a layer on top of a database.

This is the reality of software development: you have a budget and you have a goal. If all your budget goes toward making it correct without finishing it, you have a problem.

Anyway, it's just a CRUD app; it doesn't have to be that formal.

>No, but Rust is a low-productivity, high-formality language;

Setting aside performance (Python is slow as molasses), that still requires some qualification, because there is also the 'maintenance' dimension. I would take Rust over Python for CRUD any day, because the 'formality' is a feature, not a bug, for maintenance. I would take Java/C# over Rust because they balance performance and formality very well. In fact, I wouldn't use a dynamically typed language like Python or Ruby for backend infrastructure code if there were any performance or long-term maintenance requirements.
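
As a small sketch of the maintenance argument (a hypothetical User type with a stubbed lookup, not any real service): in Rust the "row not found" case is part of the return type, so a caller that forgets it is a compile error rather than a production surprise.

    // Option<User> makes the missing-row case explicit in the type.
    struct User {
        id: u64,
        email: String,
    }

    fn find_user(id: u64) -> Option<User> {
        // Stub standing in for a real database query.
        if id == 1 {
            Some(User { id, email: "a@example.com".to_string() })
        } else {
            None
        }
    }

    fn main() {
        // The compiler forces both arms to be handled.
        match find_user(2) {
            Some(u) => println!("found {} (id {})", u.email, u.id),
            None => println!("no such user"),
        }
    }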

Given that we profit from a greater need to write code, maybe this is annoying, but otherwise it keeps us in demand.
YES. The software industry produces enough unnecessary complexity to keep everybody busy.

Furthermore, it's becoming more and more hype/marketing driven.

Solutions are adopted because they are popular or "cool". CV-driven development is becoming the norm.

The opportunity is to locate experienced software engineers from before this nonsense, and create new, efficient software that runs circles around the modern complexity.

My last employer did this: a facial recognition developer in the enterprise space. Where every competitor in the industry has a server stack for their solution, we had a single integrated application replacing the entire competitor stack, and the whole thing runs perfectly fine on an Intel Compute Stick. The kicker is that such a solution is dramatically less expensive to own and operate, and dramatically less expensive to create, because the types of people with these tight skills cannot find work: they are burned-out game developers with extremely strong optimization and complex-simulation experience. They look at the complex world of web/mobile/modern development and simply want to stop writing code. I find them, and we create enterprise killers.

Good luck selling something inexpensive to enterprises.

Department heads need a reason for big budgets.

It's hard for managers to keep this stuff out sometimes, even if they want to. Engineers like to play with tools and always, always, always over-engineer.

I try hard to walk my talk here, but I catch myself doing it too. Simplicity is much harder than complexity; it requires more thought and deeper conceptual integration. Right now I am rethinking some older things and trying very hard not to second-system-effect it.

On top of this, you have an industry pushing this stuff, and cloud vendors who love the added cost it brings from managed services and extra overhead. Cloud makes money off complexity. Complexity also makes it harder to move, which improves lock-in.

Lastly, you have the fact that our industry is cash-flush. There has been little need to trim the fat. Just raise more VC or add more billable SaaS, or… well, crypto comes with its own casino revenue.

A few months ago I found out that front-ends in separate git repos aren't cool anymore - it's back to monorepos now: https://rushjs.io/
> where is our software industry heading?

It was never our industry. There was a brief window ~2008-2010 where software engineers had a lot of power within their orgs, but at the end of the day we were always the laborers, never the owners of this industry.

Capitalists loathe a monopoly on skill; it gives labor a dangerous amount of leverage.

The people who own our industry, mostly venture capitalists and other investors, are interested in capturing value at all costs, and they have limited the power that engineers had.

This was the drive behind the "MVP" and "ship it!" cultures that are partially responsible for this mess. But complexity is also valued by management because it reduces the ability of an individual engineer to have an impact, thereby reducing their monopoly on skill. In addition, we've seen an industry pop up that focuses exclusively on rushing in new, minimally skilled devs looking to make a quick buck: people who only know how to fiddle with knobs in a big, complex machine.

This is also why the hiring process is even more awful today than it was a decade ago. A decade ago, anyone who was passionate about programming and had a GitHub repo filled with cool projects could get hired. That has been transformed into a machine that seeks to make every engineer the same, trained only to pass a series of algorithmic puzzles from LeetCode and HackerRank. These are even different from what they emulate: the old Google challenges were hard, but they were given by devs who knew what they were doing. Half of the algorithm puzzles I've been given in recent years were clearly written by devs who only understand what the answer is, without any deeper insight into the problem.

> are we doomed forever?

Only until this latest wave of tech (it's not really a bubble) crashes. Once demand for software skill plummets, it will likely be like the dot-com bust valley of 2004-2010. The only people doing software then were people who cared about it, and because salaries crashed, many good engineers found other niches where they could apply their skills. That's when you saw some really interesting problem solving going on in the field.

In an inflationary environment, this is fine. Just wait until we have interest rates at 12%.

This comment is irrelevant to the point of being a non-sequitur