
Ask HN: What's your "it's not stupid if it works" story?

source link: https://news.ycombinator.com/item?id=38733282

87 points by j4yav 8 hours ago | 101 comments
Cursed hacks, forcing proprietary software to do what you want through clever means, or just generally doing awful, beautiful things with technology?
15+ years ago, I was working on indexing gigabytes of text on a mobile CPU (before smart phones caused massive investment in such CPUs). Word normalization logic (e.g., sky/skies/sky's -> sky) was very slow, so I used a cache, which sped it up immensely. Conceptually the cache looked like {"sky": "sky", "skies": "sky", "sky's": "sky", "cats": "cat", ...}.

I needed cache eviction logic as there was only 1 MB of RAM available to the indexer, and most of that was used by the library that parsed the input format. The initial version of that logic cleared the entire cache when it hit a certain number of entries, just as a placeholder. When I got around to adding some LRU eviction logic, it became faster on our desktop simulator, but far slower on the embedded device (slower than with no word cache at all). I tried several different "smart" eviction strategies. All of them were faster on the desktop and slower on the device. The disconnect came down to CPU cache (not word cache) size / strategy differences between the desktop and mobile CPUs - that was fun to diagnose!

We ended up shipping the "dumb" eviction logic because it was so much faster in practice. The eviction function was only two lines of code plus a large comment explaining all the above and saying something to the effect of "yes, this looks dumb, but test speed on the target device when making it smarter."

Similarly, a modder recently found that unrolling loops _hurt_ performance on the N64 because of RAM bus contention: https://www.youtube.com/watch?v=t_rzYnXEQlE
Those are my favorite functions! Two lines of code with a page of text explaining why it works.
I implemented an enterprise data migration in javascript, running in end-user's browsers. (So no server-side node.js or such.)

It was a project scheduled for 2-3 months, for a large corporation. The customer wanted a button that a user would click in the old system, requesting a record to be copied over to the new system (Dynamics CRM). Since the systems would be used in parallel for a time, it could be done repeatedly, with later clicks of the button sending updates to the new system.

I designed it to run on an integration server in a dedicated WS, nothing extraordinary. But 3 days before the scheduled end of the project, it became clear that the customer simply would not have the server to run the WS on. They were incapable of provisioning it and configuring the network.

So I came up with a silly solution: hey, the user will already be logged in to both systems, so let's do it in their browser. The user clicked the button in the old system, which invoked a JavaScript that packed the data to migrate into a payload (data -> JSON -> Base64 -> URL escape) and GET-ed it as a URL parameter to a 'New Record' creation form in the new system. That entire record type was just my shim; when its form loaded, it woke another JavaScript up, which triggered a Save, which triggered a server-side plugin that decoded and parsed the data and then processed it, triggering like 30 other plugins that were already there - some of them sending data on into a different system.

I coded this over the weekend and handed it in, with the caveat that since it has to be a GET request, it simply will not work if the data payload exceeds the maximum URL length allowed by the server, ha ha. You will not be surprised to learn the payload contained large HTMLs from rich text editors, so it did happen a few times. But it ran successfully for over a year until the old system eventually was fully deprecated.
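The encoding pipeline described here (data -> JSON -> Base64 -> URL escape) is easy to sketch. This Python version is illustrative only; the original ran as browser JavaScript with the decode step in a server-side plugin.

```python
# Round-trip for the payload: serialize to JSON, Base64-encode so it is
# safe to embed, then percent-escape for use as a URL parameter.
import base64
import json
from urllib.parse import quote, unquote


def encode_payload(record: dict) -> str:
    raw = json.dumps(record).encode("utf-8")
    return quote(base64.b64encode(raw).decode("ascii"))


def decode_payload(param: str) -> dict:
    raw = base64.b64decode(unquote(param))
    return json.loads(raw.decode("utf-8"))
```

Base64 alone inflates the data by a third, and escaping adds more on top, which is why long rich-text HTML could blow past the server's URL length limit exactly as described.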

(Shout out to my boss, who was grateful for the solution and automatically offered to pay for the overtime.)

We have a production service running for years that just mmaps an entire SSD and casts the pointer to the desired C++ data structure.

That SSD doesn't even have a file system on it, instead it directly stores one monstrous struct array filled with data. There's also no recovery, if the SSD breaks you need to recover all data from a backup.

But it works and it's mind-bogglingly fast and cheap.
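A toy version of the idea, with an ordinary file standing in for the raw SSD and Python's struct module standing in for the C++ struct cast (all names here are illustrative):

```python
# Treat a memory-mapped byte range as an array of fixed-size records,
# with no file system structure beyond the raw bytes themselves.
import mmap
import struct

RECORD = struct.Struct("<q32s")  # one record: 64-bit id + fixed-size name


def open_store(path: str, capacity: int) -> mmap.mmap:
    # Pre-size the "device" and map it: just capacity * RECORD.size bytes.
    with open(path, "wb") as f:
        f.truncate(RECORD.size * capacity)
    backing = open(path, "r+b")
    return mmap.mmap(backing.fileno(), RECORD.size * capacity)


def write_record(store: mmap.mmap, index: int, rec_id: int, name: bytes) -> None:
    RECORD.pack_into(store, index * RECORD.size, rec_id, name)


def read_record(store: mmap.mmap, index: int):
    rec_id, name = RECORD.unpack_from(store, index * RECORD.size)
    return rec_id, name.rstrip(b"\x00")
```

As in the story, there is no recovery path: if the backing store is corrupted, you restore from backup.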

I've always wanted a Smalltalk VM that did this.

Eternally persistent VM, without having to "save". It just "lives". Go ahead, map a 10GB or 100GB file to the VM and go at it. Imagine your entire email history (everyone seems to have large email histories) in the "email array", all as ST objects. Just as an example.

Is that "good"? I dunno. But, simply, there is no impedance mismatch. There's no persistence layer, your entire heap is simply mmap'd into a blob of storage with some lightweight flushing mechanic.

Obviously it's not that simple, there's all sorts of caveats.

It just feels like it should be that simple, and we've had the tech to do this since forever. It doesn't even have to be blistering fast, simply "usable".

Wow. How do design decisions get made that result in these types of situations in the first place?
If I had to guess:

Doing it this way = $

Doing it that way = $$$

As a 12 year old, I tried to overclock the first "good" computer of my own (AMD Duron 1200 MHz). The system wouldn't start at 1600 MHz and I didn't know a BIOS reset exists. I ended up putting the computer in the freezer and letting it cool down for an hour, with the CRT display on top and the power, VGA, and keyboard cables running into the freezer. I managed to set it back to the original frequency before it died.
I kept a supply of coins in the freezer. I would regularly toss a few into the heatsink on my TRS-80 that was unstable after a RAM upgrade.
When I was a teenager my friend would throw his laptop into the freezer for a few minutes every hour when we were playing games. He probably threw it in there hundreds of times, and it worked fine for years.
I don't know why but this reminds me of how we picture-framed my friend's old Wifi chip after replacing it, because that chip failing all the time was basically the core feature of our group's gaming sessions.
Mine's getting command output out of docker. For long builds (I had one that took 4 hours), it was gutting to have it fail a long way in and not be able to see the output of the RUN commands for thorough debugging.

So I devised a stupidly simple way: add && echo "asdfasdfsadf" after each RUN command. I mashed the keyboard each time to come up with some nonsense token. That way, docker would see RUN lines as different each time it built, which would prevent it using the cached layer, and thus would provide the commands' output.

I wrote the same thing (more completely) here: https://stackoverflow.com/a/73893889/5783745

(a comment on that answer provides an even better solution - use the timestamp to generate the nonsense token for you)

As stupid as this solution is, I've yet to find a better way.

I'm a Docker amateur, so this will be a dumb question, but if you were using that technique after every RUN line in a Dockerfile, wouldn't they be the same every time they're run? Like, it's random, but it's the same random values stored in the file, so wouldn't the lines get cached? Or did you adjust the Dockerfile each time?

Or am I completely misunderstanding?

> Or am I completely misunderstanding?

No, I just didn't explain it very well. You have to mash the keyboard each time (i.e. each build) to come up with some new token. The reason this (dumb) idea was so useful was that the alternatives were worse: either run with --no-cache (i.e. wait 2-3 hours), or build normally and have no clear idea why it failed (since cached layers produce no console output). Docker provides no way to apply --no-cache on a per-layer basis, so taking a moment to mash the keyboard in a few places, as absolutely stupid as that is, was way better than not having complete console output.

I had an old boiler that would sometimes trip and lock out the heat until someone went down and power cycled it. (It was its own monstrous hack of a gas burner fitted to a 1950s oil boiler and I think a flame proving sensor was bad.)

Every time it happened, it made for a long heat up cycle to warm the water and rads and eventually the house.

So I built an Arduino-controlled NC relay that removed power for 1 minute out of every 120. That was often enough to eliminate the effect of the fault, but not so often that I had concerns about too much gas accumulating if the boiler ever failed to ignite. 12 failed ignitions per day wouldn't produce a build-up to be worried about.

That ~20 lines of code kept it working for several years until the boiler was replaced.
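The timing logic is roughly this; a sketch in Python rather than the original Arduino code, where only the 1-minute-in-120 duty cycle comes from the story:

```python
# Power-cycle duty cycle: cut power for the first OFF_MINUTES of every
# PERIOD_MINUTES, based only on elapsed time.
OFF_MINUTES = 1
PERIOD_MINUTES = 120


def boiler_powered(elapsed_minutes: float) -> bool:
    """True when the relay should pass power to the boiler this minute."""
    return (elapsed_minutes % PERIOD_MINUTES) >= OFF_MINUTES
```

Using a normally-closed relay is the nice safety touch: the controller only energizes the coil during the off minute, so if the Arduino dies, the boiler keeps its power.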

I have a similar one.

Our boiler has a pump to cycle hot water around the house - this makes it so you get warm water right away when you turn on a faucet and also prevents pipes in exterior walls from freezing in the winter.

This stopped working: the pump was fine, but the boiler was no longer triggering it.

I just wired up mains through an esp32 relay board to the pump and configured a regular timer via esphome.

Temperature-based logic would be even better, but I haven't found a good way to measure pipe temperature yet.

I eventually switched to an ESP32 and added temperature graphing: https://imgur.com/a/VM7nD74

IIRC, I used an RTD that I had left over from a 3D printer upgrade, but an 18B20 would work fine as well. A 10K NTC resistor might even be good enough. For what I needed (and I think for what you need), just fixing the sensor to the outside of the pipe [if metal] will give you a usable signal. My sensor was simply attached with metal HVAC tape to the front cast-iron door of the burner chamber.

But a dead-simple timer solution gets you pretty far as you know.

The pipes are insulated and I didn't want to cut into that, but maybe a small hole for a sensor wouldn't be too bad.

But as you say, the timer works well enough, and that means little motivation to continue working on it -- countless other projects await :)

BTW I've also tuned the timer to run for longer in the morning to get a hot shower ready.

Edit: nice dashboard, what are you using for the chart? I like the vintage look.

That is another somewhat hacky thing.

I have a mix of shame and pride that the chart (everything in the rectangle) is entirely hand-coded SVG elements emitted by the ESP web request handler.

Couldn’t that be achieved with a mechanical timer switch and zero lines of code ?
Around 16 years ago, Wordpress security was just not up to snuff yet, and my popular Wordpress-based site kept getting hacked by pharmaceutical spammers and the like. After several such incidents, I wrote a "wrapper" that loaded before Wordpress to scrutinize incoming requests before a lick of Wordpress code was executed. It had blacklists, whitelists, automatic temporary IP blocking, and that sort of thing. There was no reason for visitors to upload files, so any non-admin POST request with a file upload was automatically smacked down.

It wasn't pretty, but the hackers never got through again, and that clunky thing is still in service today. I coded it to quarantine all illicit file uploads, and as a consequence I have many thousands of script kiddies' PHP dashboards from over the years.

That reminds me of my terrible spam prevention hack. We kept getting a bunch of spammers signing up for our newsletters, so I made the form require a JavaScript based hidden input to submit. That worked for a while, but then new spammers started executing the JS and getting through. So I added new JS that just waits 15 seconds before putting the right hidden values in the form, and that’s done the trick (for now).
I worked for a US media company that forced us to use a half-baked CMS from a Norwegian software company, with no apparent provisions in the contract for updates or support.

The CMS was absolutely terrible to work in. Just one small example: It forced every paragraph into a new textarea, so if you were editing a longer news story with 30 or 40 paragraphs, you had to work with 30 or 40 separate textareas.

So I basically built a shadow CMS on top of the crappy CMS, via a browser extension. It was slick, it increased productivity, it decreased frustration among the editors, and it solved a real business problem.

If we had had a security team, I'm sure they would have shut it down quickly. But the company didn't want to pay for that, either!

My favorite one is probably from when I was working at a retail Forex firm where consumers would try to make money on currencies. There were a lot of support calls where they disputed the price they saw versus the price at which their order was entered. My solution was to log the state when they clicked the trade button. The interesting bit wasn't that I logged the currency pair and price; instead, I did a tree walk of all the Java Swing GUI elements in the open trade window and rendered them into the log file as ASCII, using "(o)" for options, "[x]" for checkboxes, "[text_____]" for text fields, etc. I wasn't sure it would work, since each element's position was rounded to the closest text line, and sometimes a line just had to be inserted between two others when an element fell about half a line in between.

The ASCII 'screenshots' came out beautifully. From then on, when a call came in, we told them to use the view-log menu item and scroll to the trade time, and they'd shut up quick. A picture is worth a thousand words indeed.
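A hedged sketch of the idea; the widget model and glyphs here are illustrative stand-ins for the original Swing tree walk:

```python
# Render a window's widgets as an ASCII "screenshot": round each widget
# to the nearest text line and emit a crude glyph per widget type.

GLYPHS = {
    "option": lambda w: "(o)" if w["value"] else "( )",
    "checkbox": lambda w: "[x]" if w["value"] else "[ ]",
    "text": lambda w: "[" + str(w["value"]).ljust(10, "_") + "]",
    "label": lambda w: str(w["value"]),
}


def ascii_screenshot(widgets, line_height=16):
    lines = {}
    # Sort top-to-bottom, left-to-right, then bucket widgets into rows
    # by rounding their pixel y-coordinate to a text line.
    for w in sorted(widgets, key=lambda w: (w["y"], w["x"])):
        row = round(w["y"] / line_height)
        lines.setdefault(row, []).append(GLYPHS[w["kind"]](w))
    return "\n".join(" ".join(lines[r]) for r in sorted(lines))
```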

I worked at a startup where the core backend was 1 giant serverless Function. For error handling and recovery, the Function ran in a while loop.

For all its faults, it worked and it was generating revenue. Enough that the startup got to a sizable Series A. That experience completely changed how I think about writing code at startups.

"on error resume next" never died, it just became serverless!
A customer of mine wraps their Python code in try/except/pass so it never stops because of errors; it just skips whatever would have run after the exception. I added some logging, so we're slowly understanding what fails and why.
this is great. if you look at older game source code you find things like this. things that we view as horrible hacks which are both extremely stable and perform well.

i see no reason to stop using these types of solutions, when appropriate.

Old games also didn't use a database, they saved everything in a giant text file.

I'm not sure if they were "extremely stable" though. Like Myspace, it might only work up until a certain point. What kills stuff is usually going viral.

I think you maybe underestimate the utility and reliability of flat text files on a filesystem.

If you don’t trust a filesystem, you can’t trust anything that uses one.

Flat files don’t scale past a certain point, but that point is way higher than most believe it is.

I had a GCP Cloud Run function that rendered videos. It was fine for one video per request, but after that it slowed to a crawl and needed to shut down to clear out whatever was wrong. I assume a memory leak in MoviePy? I spent a couple of days looking at multiple options and trying different things; in the end I just duplicated the service so I had three of them and rotated which one we sent video renders to, doing each render one at a time. It was by far the cheapest solution, and it meant we processed them in parallel rather than serially, so it was faster. All in all it worked a treat.
This reminds me of a service I recently found that was routinely crashing out and being restarted automatically. I fixed the crash, but it turns out it had ALWAYS been crashing on a reliable schedule - and keeping the service alive longer created a plethora of other issues, memory leaks being just one of them.

That was a structural crash and I should not have addressed it.

How many memory leaks were discovered only during the winter code freeze, because there were no pushes being done, so no server restarts?
At Fastmail, the ops team ran failovers all the time, just to make our failovers so reliable they worked no matter what. Only once in my tenure did a failover fail, and in that case there was a --yolo flag.
Oooh, you’ve just reminded me of the email server at my first dev job. It would crash every few days and no one could work out why. In the end someone just wrote a cron-job type thing to restart it once a day, problem solved!
What you call a hack everyone else calls devops :-). You have higher standards!
My mom's place (about 100 miles from me) has a water heater that's of an age where it could fail, so I put together a Pico W and a water sensor. I had it notify me daily just to make sure it was still working. And for reasons unknown, every 8 days it would stop notifying. A reboot would resolve it. We tried logging errors and having it report upon reboot but I wasn't versed enough with Pi to figure out anything more than it being an HTTP POST error. So I changed the code so when it got to that error instead of logging it would just reboot itself, and all has been smooth since.
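The reboot-on-error pattern looks roughly like this. The reset is injected as a callback so the sketch can run anywhere; on the Pico W it would be machine.reset() in MicroPython, and the function names here are made up.

```python
# On the daily check-in, don't try to diagnose the wedged network stack:
# whatever state it gets into, a reboot clears it, so just reset.

def notify_or_reboot(send_notification, reset):
    """Attempt the daily notification; on any I/O error, reboot instead."""
    try:
        send_notification()
    except OSError:
        reset()
```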
In the undergrad Control Systems course, I brute-forced the Kalman filter matrix to balance the inverted-pendulum-on-a-track experiment. Worked fine.
I "fixed" an appliance that was nuisance tripping an AFCI breaker by wrapping the power cord one turn through a ferrite choke.
Way back I had a friend that wanted his (maybe) "Sargon" chess program to run faster. Luckily it was on the Atari 8-bit and I knew a thing or two. The program seemed to use standard b/w hires graphics nothing super fancy, so I thought I could make a pre-boot loader.

The theory was that the Atari spends a good chunk (30%) of its time for display memory access. That can be disabled (making a black screen) and re-enabled. My pre-boot program installed a vertical blank interrupt handler reading the 2nd joystick port: up/down for display on/off. After installing the handler, the program waited for diskette swap and pretended to be the original program loader reading the disk layout into memory and jumping to the start. Worked like a charm first go.

My CPAP's onboard humidifier failed.

I ended up swapping it out to a generic in-line CPAP humidifier, but at the same time, realized I could partially automate the process of refilling the chamber (and not have to keep unhooking hoses) by adding an in-line oxygen tee, some aquarium plumbing, a check valve, and a 12 volt pump and switch.

In the morning I just hold a button and the tank magically refills itself ;)

Introducing Semi-Autofill(tm): https://i.ibb.co/NmDbVvw/autofill.png

(Also: The Dreamstation, while recalled, was personally de-foamed and repaired myself -- I don't trust Philips any further than I can throw them now. I now self-service my gear.)

My organization has a firewall policy straight outta the 90s. They'll only allow static-IP-to-static-IP traffic rules over single ports. This is in conflict with modern cloud CI/CD, where you don't know ahead of time what IP you're gonna get in a private subnet when doing a new build.

Our workaround was to configure HAProxy as a reverse proxy and do creative packet forwarding. Need to access an Oracle database on prem? Bind port 8877 to point at that database's IP on port 1521 and submit a firewall rule request.

When I was a teenager I had a friend who wanted to build a PC on a very limited budget, and she wanted it to be able to play The Sims 2. Well, after much bargain hunting and throwing ideas around, we couldn't find a way to afford every component we needed, but we were close, so we decided to forgo a case! Just put the motherboard on the desk with the other components arrayed around it. Cables everywhere. The tricky part is we had no power button, but I showed her which pins to short with a paper clip, and it worked great.
A company I worked for had a website where you could order mobile phones and subscriptions from different providers. This was just a frontend; behind the scenes, they just ordered them directly from those providers. But those providers had terrible sites still written for IE6 (this was in 2010, I think), and yet those sites were all they had (for some reason; I don't know the full background).

So what happened is: the customer would order their phone subscription on the front end, that would create a job file that would be sent to a scheduler that managed 10 Windows VMs that used a Ruby Watir script to direct IE6 to fill in the data from the job file on the old decrepit website.

It's the most horrific hack that I ever touched (I forgot exactly, but I had to make some adjustments to the system), but it worked perfectly for a couple of years until those providers finally updated their websites.

ZX Spectrum BASIC. Numbers could only be 8 digits; I needed more for a Spacemaster RPG ship designer program I wrote for my friends. I came up with storing values as strings and splitting/manipulating them as numbers when required. I was about fifteen years old. Probably the smartest thing I have ever written, grin.
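The trick translates directly: store numbers as decimal strings and do the arithmetic digit by digit, so precision is limited only by string length. A sketch in Python (the original was ZX Spectrum BASIC):

```python
# Add two non-negative decimal numbers held as strings, digit by digit
# with a carry, exactly like pencil-and-paper addition.

def add_decimal_strings(a: str, b: str) -> str:
    result, carry = [], 0
    # Pad the shorter operand with leading zeros, then work right-to-left.
    for da, db in zip(reversed(a.zfill(len(b))), reversed(b.zfill(len(a)))):
        carry, digit = divmod(int(da) + int(db) + carry, 10)
        result.append(str(digit))
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))
```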
Hey, sounds like most of my Advent of Code solutions :)
Launching a headless browser just to generate some PDFs.

Turns out, if you want to turn html+css into pdfs quickly, doing it via a browser engine is a "works really well" story.
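Chrome's --headless and --print-to-pdf flags are the usual route. This sketch only builds and runs the command; the binary names are assumptions about your environment.

```python
# Drive a headless Chromium to render an HTML file to PDF via the
# browser's own print pipeline.
import shutil
import subprocess


def build_print_cmd(html_path: str, pdf_path: str, chrome: str = "chromium"):
    return [chrome, "--headless", "--disable-gpu",
            f"--print-to-pdf={pdf_path}", html_path]


def html_to_pdf(html_path: str, pdf_path: str) -> bool:
    # Look for a browser binary; bail out gracefully if none is installed.
    chrome = shutil.which("chromium") or shutil.which("google-chrome")
    if chrome is None:
        return False
    subprocess.run(build_print_cmd(html_path, pdf_path, chrome), check=True)
    return True
```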

I did the same. We had a tool that would let you export to PDF. That PDF would be sent to our customers. Initially we just used the print functionality in the user's browser, but that caused output to vary based on the browser/OS used.

People complained that the PDFs generated were slightly different. So instead I had the client send over the entire html in a post request and open it up in a headless chrome with --print-to-pdf and then sent it back to the client.

I wrote a Python package [1] that does something similar! It allows the generation of images from HTML+CSS strings or files (or even other files like SVGs) and could probably handle PDF generation too. It uses the headless version of Chrome/Chromium or Edge behind the scenes.

Writing this package made me realize that even big projects (such as Chromium) sometimes have features that just don't work. Edge headless wouldn't let you take screenshots up until recently, and I still encountered issues with Firefox last time I tried to add support for it in the package. I also stumbled upon weird behaviors of Chrome CDP when trying to implement an alternative to using the headless mode, and these issues eventually fixed themselves after some Chrome updates.

[1] https://github.com/vgalin/html2image

Yeah it's the same concept; instead of .screenshot you do .pdf in Puppeteer.

But with pdfs the money is on getting those headers and footers consistent and on every page, so you do need some handcrafted html and print styling for that (hint: the answer is tables).

I've implemented recently just the same thing, but for SVG -> PNG conversion. I found that SVG rendering support is crap in every conversion tool and library I've tried. Apparently even Chrome has some basic features missing, when doing text on path for example. So far Selenium + headless Firefox performs the best ¯\_(ツ)_/¯
I've seen a bit of SaaS and legacy websites-with-invoice-system doing that, with e.g. wkhtmltopdf. It isn't a lightweight solution, but it's a good hammer for a strange nail, a lot of off-the-shelf report systems suck.
I mean browsers are built for and the best at displaying html+css. Given that they are "living standards", very few other programs can hope to keep up.
I wrote it up in a bit more detail[1], so I'm giving away the punch line here, but I used to use some cursed bash wrappers to smuggle my bashrc and vimrc along on ssh sessions to mostly-ephemeral hosts by stashing them in environment variables matching the LC_* pattern allowed by default for pass-through in debian-ish sshd configs.

[1]: https://gitlab.com/-/snippets/2149340

I used to be really into modding the game Jedi Knight: Dark Forces II. The quirky engine has all sorts of weird bugs and limitations.

I created a flare gun weapon (similar to the sticky rail gun missiles, so nothing too crazy here) but found that if a player died, the flares were still stuck on them when they respawned and damaging them, even though their whole location had changed. This bug would exist with rail gun missiles as well, but since the death animation was long and the fuse so short, it would never present in the base game.

I experimented with using detach commands that ran on player death but they'd just instantly reattach to the player model because of their proximity. I ended up creating an invisible explosive entity that fired on player death from the center of the player which did a damage flag ignored by players but which destroyed the flares.

It's not the best story - I'm sure there are some greats here - but I tricked GitLab into running scripts that looked like https://gitlab.com/jyavorska/c64exec/-/blob/master/.gitlab-c... by modifying a runner to pass everything through the VICE Commodore BASIC emulator. It would even attach the output file as an artifact to the build.
A small contribution to the increase of nonsense in the world
I used a WiFi smart switch and a USB thermometer to make a sous vide cooker. I plugged a slow cooker into the smart socket, put the thermometer in it, and wrote a program to turn the switch on/off depending on the temperature the thermometer registered.
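The control loop here is a classic bang-bang thermostat. A minimal sketch of one control tick, with a little hysteresis so the relay doesn't chatter around the setpoint (thresholds and names are illustrative, not from the original script):

```python
# One tick of a bang-bang sous vide controller: below the band, heat;
# above the band, stop; inside the band, keep the current state.

def control_step(temp_c: float, target_c: float, heater_on: bool,
                 hysteresis: float = 0.5) -> bool:
    """Return the new heater state given the current temperature reading."""
    if temp_c < target_c - hysteresis:
        return True
    if temp_c > target_c + hysteresis:
        return False
    return heater_on
```

The real program would loop forever: read the USB thermometer, call control_step, and toggle the WiFi socket accordingly.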
I used SQLite for coordination between processes. It was a huge Python application that originally used the multiprocessing library and had to be migrated to Rust.

In hindsight, it would have been better to use a local HTTP server. Seemed like overkill at the time.

I have centralized AC, and the wall-mounted control panel is located in a small storage room. I wanted to hack the control panel with an Arduino and a Raspberry Pi so I could control it remotely via my Alexa. I ended up buying a SwitchBot [0] and an IP camera and was done with it.

0: https://www.switch-bot.com/products/switchbot-bot

Running an industrial machine installation and my Eastern European colleagues looped a 200 meter tape measure around the line 4 times to get a more accurate measure.
Needed to get data out of a CRM system for specific printed orders - when it was printed, who processed it, what was on the order etc.

The process of authenticating with the CRM was complex and there wasn't a way to get anything at print time and most of the data was stored all over the place.

But I found the printed report knew almost everything I wanted, and you could add web images to the paperwork system. So I added a tiny image with variable names like "{order_number}.jpg?ref={XXX}&per={YYY}" and then one for each looped product like "{order_number}/{sku}.jpg?count={X}&text=..." etc. After a few stupid issues (like no support for https, and numbers sometimes being in European format) it was working and has remained solid ever since. Live time-stamped data, updates if people print twice, gives us everything we wanted by a very silly method.
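The trick reduces to building an image URL that smuggles the data out as query parameters; a minimal sketch with a made-up host and parameter names:

```python
# Build a "tracking pixel" URL: the server logging image requests sees
# every field you care about, at print time, with no CRM integration.
from urllib.parse import urlencode


def pixel_url(order_number: str, **fields) -> str:
    return f"http://logs.example.com/{order_number}.jpg?{urlencode(fields)}"
```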

I had to connect an old accounting system to a web app with enhanced ui (an operator determines a payment on a visual graph of contracts between companies, plus graphs editor). There were two ways: a separate db with periodic sync, and a direct COM-connection to the old app, which was scriptable through js<=>COM library. I chose the latter, tests worked fine.

After a month or so I started to notice that something is wrong with performance. Figured out that every `object.field` access through a COM proxy takes exactly 1ms. Once there’s enough data, these dots add up to tens of seconds.

>_<

Instead of doing a rewrite, I just pushed as much of the JS logic as possible beyond the COM boundary, so only a constant or small number of `a.b.c` accesses remained on my side. I had to write a JSON encoder and object serialization inside the old app to collect and pass all the data in one go.

The web app was abandoned a few months later for unrelated reasons.

Generated HTML email newsletters from Excel (in 2004).

It was a big old-fashioned bookseller trying to compete with Amazon. Software and the web was locked down tight, but they opened a daily report in Excel, and I built a VBA macro that generated the necessary HTML and published the images to an FTP server. Turned a 2 day job into a 10 minute one.

`envsubst` on a k8s manifest, for templating. The space for templating/dynamic k8s manifests is complex, needlessly so I felt. But this... just works. It has been running in CI for a couple months now, deploying to prod. I'm sure the day that breaks down will come, but it's not here yet.
`sed` text files as a replacement for templating.

In the text file you have something you want to template (or "parametrize") from an outside variable, so you name that something like @@VAR@@ and then you can sed that @@VAR@@ :-)
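What sed does with s/@@VAR@@/value/g is just string replacement, so the whole "templating engine" fits in a few lines (a sketch, not anyone's production code):

```python
# @@VAR@@-style templating via plain string replacement. The loud
# sentinel means a half-substituted file is easy to spot by eye.

def render(template: str, variables: dict) -> str:
    out = template
    for name, value in variables.items():
        out = out.replace(f"@@{name}@@", str(value))
    return out
```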

That's how we do it. Not with sed exactly, but string replacement. One is a bulk email sender that only supports VBScript, the other is C#, but the users aren't supposed (or need) to have full templating powers, so this way it's easier.
Wait, you're telling me this isn't a Best Practice™?
Mid 90s, can't remember the tech (VB, C, Java?). In the very last hours before an important demo, one of the programs stopped working. Not always, only every second time we ran it. No version control, no unit tests. It was obviously some side effect, but debugging it before the demo and making changes could make it worse. Maybe it wouldn't even run anymore, anytime. We decided to wrap it in a script that started it, killed it, and ran it again. That worked and got us through the demo.
I built a vacation plant waterer with some tubing, 3d-printed heads, submersible pumps and an Arduino. For a longer trip, I needed a source of water that I could pump from.

I realized the toilet tank is self-refilling because of the float valve and won't overflow. So I cleaned it out, and it made a good place to put my submersible pumps.

When I was a network support engineer, we had a case where a company had a bizarre & intermittent problem on workstations. Wish I could remember what the problem was but this is 20+ years ago now.

To troubleshoot it, we installed Microsoft Network Monitor 2.0 (this was well before Wireshark...) on a few workstations. NM2 installed a packet capture driver and a GUI front-end. And... the problem went away.

Our best guess was that the problem was some sort of race condition and installing the packet capture driver was enough to change the timing and make the problem go away. The customer didn't want to spend more time on it so they installed NM2 everywhere and closed the case.

I occasionally imagine somebody trying to figure out why they're still installing the NM2 driver everywhere.

TL;DR: Had no database so I made a PHP page with a hardcoded array of 100,000 coupon codes.

Made a PHP landing page for a customer where they could redeem a coupon code they were sent via snail mail. About 100,000 codes were sent out via USPS.

Threw together the basic code you might expect, simple PHP page + MySQL database. Worked locally because customer was dragging their feet with getting me login creds to their webhost.

Finally, with the cards in the mail, they get me the login creds at 5PMish. I login and there's no database. Cards are going to be arriving in homes as early as 8AM the next day. How TF am I going to make this work... without a database?

Solution... I just hardcoded all 100,000 codes into a giant PHP array. Or maybe it was a hash/dict or something. I forget.

Anyway, it performed FINE. The first time you used the page it took about 30 seconds to load. But after that, I guess `mod_php` cached it or something, and it was fine. Lookups returned in 100ms or so. Not spectacular but more than performant enough for what we needed.

Got paid. Or, well, my employer did.

This was a while ago so it probably wouldn’t work today.
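The idea translates directly: bake every code into the source as a constant-time lookup structure and skip the database entirely. A minimal sketch in Python (the original was PHP; the `CODE-` format is purely hypothetical):

```python
# Hardcoded set of 100,000 coupon codes -- in the real PHP page this was a
# literal array pasted into the source. Built here programmatically so the
# example stays short; the code format is made up.
COUPON_CODES = {f"CODE-{i:06d}" for i in range(100_000)}

def is_valid(code):
    """O(1) membership test; no database round trip needed."""
    return code in COUPON_CODES
```

Once the interpreter has parsed the source once (the "30 seconds" the author mentions), every subsequent lookup is a single hash probe.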

I had to get past a captcha for automation and the solution I came up with was to always choose 2. If it was incorrect, just request a new captcha until it passed. For some reason, 2 was the answer most of the time so it actually rarely had to retry anyways
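The retry loop can be sketched in a few lines of Python. Everything here is hypothetical stand-in code (the real automation hit an actual CAPTCHA endpoint); `itertools.cycle` simulates the site serving fresh challenges:

```python
import itertools

# Stand-in for the site's CAPTCHA endpoint: each call serves a new challenge
# whose correct answer is one of a few options (hypothetical values).
_challenges = itertools.cycle([3, 1, 2, 4])

def submit_answer(guess):
    """Submit a guess against a freshly served CAPTCHA; True if accepted."""
    return next(_challenges) == guess

def pass_captcha(guess=2, max_attempts=50):
    """The hack: always answer `guess`; on rejection, just request a new CAPTCHA."""
    for attempt in range(1, max_attempts + 1):
        if submit_answer(guess):
            return attempt  # number of tries it took
    raise RuntimeError("never passed")
```

If the answer distribution is skewed toward 2, as the author observed, the expected number of retries stays small.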

Definitely wouldn't work today. Nowadays you need to classify like 30 images of bicycles and 20 fire hydrants and pray to god before they accept your answer...
This is why I don’t have an account with Snapchat/Instagram/etc. I tried signing up and physically couldn’t get past the challenges. I take too long to solve them and then I’m asked to solve more…
Sometimes if they hate your client and IP they put you into a captcha tar pit that you only think you can get out of. Only a bot would keep trying, but a human will die in there too, even with the tenacity of a bot.
One of the first things I built as a developer at the first startup I worked for (circa 1998 or 1999, I was originally hired as a graphic and web designer) was a system I wrote in Allaire ColdFusion that used Macromedia Flash Generator to render and save custom graphic page headers and navigation buttons for e-commerce websites by combining data stored in an Access database with Flash templates for look and feel.
Getting a DDoS attack and just running a few iptables rules by hand mostly fixed it until upstream blocked it for us.
I was part of a team which had to make web interactives for an old desktop-only Java-based CMS which published out to HTML. This was back before cross-publishing to formats like Apple News was important; we only had to worry about things working on the browser version.

The CMS didn't support any kind of HTML/JS embed, and had quite a short character limit per "Text" block. But luckily, that block didn't filter out _all_ inline HTML - only some characters.

So, a bootstrap "Loading" element was inserted, along with a script tag which would bring in the rest of the resources and insert those in to the page where the bootstrap was placed. This quickly became a versatile, re-usable loader, and allowed us to launch the features. But all this grew from a very inelegant hack which just happened to work.

Monkey patching vendor code. They agreed their code didn’t work and produced wrong results, but the correct version would be slower, so they didn’t want to change it.

So I dynamically replaced the part of their code that was wrong. That monkey patch has run for years and is still going :)
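In Python terms, such a patch is just rebinding an attribute at import time. A minimal sketch, with a stand-in `vendor` namespace since the comment doesn't say what library or language was involved:

```python
import types

# Stand-in for the vendor library (hypothetical; real code would `import vendor`).
vendor = types.SimpleNamespace()

def _vendor_mean(values):
    # The vendor's "fast" version: integer division silently loses precision.
    return sum(values) // len(values)

vendor.mean = _vendor_mean

def _correct_mean(values):
    # Slower but correct: true division.
    return sum(values) / len(values)

# The monkey patch: rebind the attribute once at startup, so every caller
# that goes through `vendor.mean` gets the fixed behavior.
vendor.mean = _correct_mean
```

The fragility is that the patch silently stops applying if the vendor renames or inlines the function in a later release, which is why patches like this deserve a loud comment.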

Who cares if it's wrong, as long as it's fast?
I use a table object and an OnAfterModifyRecord trigger to process OData calls in Navision 2018 (an ERP system). For some reason I cannot call actions manually, so I write whatever I want to do into a table and process it accordingly with triggers.
Adding a Content Security Policy of “upgrade-insecure-requests”. It does nothing meaningful for your security, but it’s enough to satisfy a bunch of these scanning tools that give you a letter grade.

Yes, we want to add a robust CSP, but we currently have some limitations/requirements that make implementation more challenging.
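For reference, the directive in question is a single response header (this is the standard CSP directive, not an invention; it only asks browsers to upgrade same-site HTTP subresource requests to HTTPS):

```http
Content-Security-Policy: upgrade-insecure-requests
```

A scanner sees *a* CSP header and awards the grade, even though this policy restricts nothing about script sources or frame ancestors.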

Define works? I've seen stupid and not working, but everyone convinced it was working until proven otherwise...

I used to work part-time restoring rare Fiats, Porsches, and VWs for an old head out in the Midwest; lots of "stupid but works" in those old cars... Mercedes-Benz once (1980s or so) employed glass containers to solve fuel pressure problems. Insane coolant loop designs, or early fuel injection systems that develop "ghosts" lol...

A bunch of SQL triggers and procedures to overcome software limitations and workaround certain bugs which the developers won't fix (3rd party app).
Reminds me of when we started implementing features as an Oracle trigger. It was meant to be “just a trigger,” but there are so many edge cases when you do an end-run around application code that it took a couple of weeks total. Boss was like “couple of weeks for a trigger!”
In a previous role, I automated an unholy amount of business processes by adding doGet() / doPost() handlers to expose google sheets as basic web services. It's a bit slow for large sheets, but was quite nice to work with and troubleshoot, and the built-in history in google sheets allowed me to experiment with little risk of data loss/corruption.
Investment banking analysts: “this is a hack??”
I've been using sshuttle to create a VPN to my server. It's a wonderful abuse of ssh.
TL/DR: I rearranged the address lines on an embedded controller with a razor knife to "fix" a bug on a bus translation chip.

I was writing the motor controller code for a new submersible robot my PhD lab was building. We had bought one of the very first compact PCI boards on the market, and it was so new we couldn't find any cPCI motor controller cards, so we bought a different format card and a motherboard that converted between compact PCI bus signals and the signals on the controller boards. The controller boards themselves were based around the LM629, an old but widely used motor controller chip.

To interface with the LM629 you have to write to 8-bit registers that are mapped to memory addresses and then read back the result. The 8-bit part is important, because some of the registers are read or write only, and reading or writing to a register that cannot be read from or written to throws the chip into an error state.

LM629s are dead simple, but my code didn't work. It. Did. Not. Work. The chip kept erroring out. I had no idea why. It's almost trivially easy to issue 8-bit reads and writes to specific memory addresses in C. I had been coding in C since I was fifteen years old. I banged my head against it for two weeks.

Eventually we packed up the entire thing in a shipping crate and flew to Minneapolis, the site of the company that made the cards. They looked at my code. They thought it was fine.

After three days the CEO had pity on us poor grad students and detailed his highly paid digital logic analyst to us for an hour. He carted in a crate of electronics that were probably worth about a million dollars. Hooked everything up. Ran my code.

"You're issuing a sixteen-bit read, which is reading both the correct read-only register and the next adjacent register, which is write-only", he said.

I showed him in my code where the read in question was very clearly a CHAR. 8 bits.

"I dunno," he said - "I can only say what the digital logic analyzer shows, which is that you're issuing a sixteen bit read."

Eventually, we found it. The Intel bridge chip that did the bus conversion had a known bug, which was clearly documented in an 8-point footnote on page 79 of the manual: 8-bit reads were translated to 16-bit reads on the cPCI bus, and then the 8 most significant bits were thrown away.

In other words, a hardware bug. One that would only manifest in these very specific circumstances. We fixed it by taking a razor knife to the bus address lines and shifting them to the right by one, and then taking the least significant line and mapping it all the way over to the left, so that even and odd addresses resolved to completely different memory banks. Thus, reads to odd addresses resolved to addresses way outside those the chip was mapped to, and it never saw them. Adjusted the code to the (new) correct address range. Worked like a charm.

But I feel bad for the next grad student who had to work on that robot. "You are not expected to understand this."

Sshfs so I can upload images onto my server to send links. :|
I made my own version of AWS WorkSpaces inside AWS because WorkSpaces is a buggy piece of shit and the client sucks. It's just an EC2 instance that can be started and stopped by a Makefile running the AWS CLI; I query the IP address and open it in MS RDP!
Wouldn't you like to know Microsoft powerbi team...

But for real I'd get fired if I said...
