

Ask HN: If the Internet were redesigned today, what changes would you make?
source link: https://news.ycombinator.com/item?id=29053266

Is DNS really a perfect protocol? How can it be improved?
Facebook was famously started and hosted in a dorm room. But this was only possible because of Harvard's history with the early internet and the fact that it had such an excess of addresses that Zuck could bind to a public IP address. We'll never know what tiny services could have blown up if people hadn't hit this wall.
I started off with computers by hosting garrysmod servers. My brother started off with computers by hosting a website dedicated to the digital TV switchover in Wisconsin (lol). This was only possible because my dad was a software engineer and paid a bit extra to get us 5 dedicated IP addresses. If he hadn't understood that, who knows what my brother or I would be doing today.
Anyway, I say IPv6.

I'm fairly certain the version of Facebook that was hosted from Zuckerberg's dorm room was just for Harvard students, and wasn't accessible from outside the campus network. Keep in mind that early FB was rolled out to only select universities on a campus-by-campus basis over the course of a year or two; it wasn't like it is today. Part of the whole appeal of FB early on was its exclusivity.
There were and are lots of places with routable IPv4 addresses that still have various kinds of traffic management and firewalling. My uni handed out real IPv4 addresses in the early 2000s (may still today!), but absolutely didn't allow inbound connections from anywhere outside of the campus network, at least not on well-known ports. You could (and lots of people did) run a server, SMB or AppleTalk file share (so much porn...), etc., but it wasn't accessible to the entire Internet. (Hotline and Carracho servers, OTOH...) I would be absolutely astounded if Harvard didn't have some inbound filtering on its network at the time; keep in mind this was 2004: peak Windows XP era... students would have been getting hacked left and right if they hadn't.
There are still some big companies around with very large IPv4 allocations for historic reasons (HP has at least two /8s I believe, its own original one plus one acquired from DEC; IBM has at least one; Apple has one, etc.) and some of them use routable addresses internally. I know IBM did this in its major offices in the mid to late 2000s. But you couldn't just spin up a server at your desk and hit it from home without going through IT and having them put in a firewall rule for you. This was all pretty standard network security stuff at that point.

It was originally hosted on Harvard's servers, and lasted only a few hours before the administration pulled the plug on facemash, which was basically a 'hot or not' clone.
Then they rented a server for $85/mo. and launched thefacebook.com a few weeks later.
Both sites were on the public internet.
https://www.fastcompany.com/59441/facebooks-mark-zuckerberg-...

If I could fix anything, it would be IPv6 itself. The biggest thing preventing its widespread adoption is its complicated nature.
An IPv4 with an extra octet or two would have seen complete adoption years ago.

No new/subsumed functionality. Stick with ARP and DHCP, etc.
A better, more actionable upgrade path and plan to get things working, so most devices could go v4.1-only sooner, etc.


ipv4 is a mess in many ways. ARP, for instance, is a disaster once you reach a certain scale, and many of the ipv4 header fields are unneeded, leading to inefficiencies. multicast on ipv4 is a mess, and rfc1918 is a neat idea but ipv6 fixes it in a far better way and gets rid of NAT in the process. (and no, NAT does not increase security, ipv6 still has firewalls!)
in my opinion, ipv6 is the far simpler protocol; it is the migration from ipv4 to ipv6 which is resulting in all this complexity, but that is not the fault of ipv6.
and before people ask why not just extend the address space... that only solves half of the problems ipv4 has and, not to mention, results in the same dual-stack situation as we have right now. extending the address space is simply not possible in a backwards-compatible way, and if we have to break compatibility, we might as well fix all the other issues with ipv4.


IPv6 tried to solve too many problems at once. We should've just focused on solving the address exhaustion issue. Instead, we have a very slow roll out of IPv6, and awful stuff like CG-NAT taking permanent hold.


It's not widely implemented because hardware support is lagging behind (Ubiquiti being a notorious example in the prosumer space); hardware support isn't being developed because IPv6 isn't rolled out widely, and software support is lacking because of the lack of rollout, which is then used as an excuse not to roll out IPv6.
I bet that if people learned IPv6 before they learned about IPv4, the conclusion would be that IPv4 is a mess. In my opinion, DHCP is a stupid protocol for assigning addresses that shouldn't have been necessary, but we've managed to staple some kind of management ideals on top of it (as if someone couldn't just set a static IP on their device), and using SLAAC feels like giving up control for some. Imagine trying to convince people that they have to set up a USBCP server on either their computer or their flash drive to make USB work without address conflicts, or to make Bluetooth work; they'd laugh at you and ask why that stuff isn't done automatically by the underlying protocols instead. DHCP is useful for many other settings, but address negotiation should never have been a problem it needed to solve in the first place.
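To make the SLAAC point concrete, here is a minimal sketch (Python, standard library only; the MAC and prefix are made-up examples) of the classic EUI-64 flavour of address formation: the host combines the router-advertised /64 prefix with an interface identifier it derives itself, and no DHCP server is involved.

    import ipaddress

    def eui64_interface_id(mac):
        """Derive the 64-bit modified EUI-64 interface identifier from a 48-bit MAC."""
        octets = [int(part, 16) for part in mac.split(":")]
        octets[0] ^= 0x02                              # flip the universal/local bit
        eui = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert FF:FE in the middle
        return int.from_bytes(bytes(eui), "big")

    def slaac_address(prefix, mac):
        """Combine an advertised /64 prefix with the EUI-64 interface identifier."""
        net = ipaddress.IPv6Network(prefix)
        assert net.prefixlen == 64, "SLAAC assumes a /64 prefix"
        return ipaddress.IPv6Address(int(net.network_address) | eui64_interface_id(mac))

    print(slaac_address("2001:db8:1:2::/64", "00:1a:2b:3c:4d:5e"))
    # -> 2001:db8:1:2:21a:2bff:fe3c:4d5e

(Modern stacks prefer random or stable-privacy interface identifiers over EUI-64, but the principle is the same: the host configures itself.)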
We've accepted NAT as a fact of life because ISPs were stingy about handing out addresses years ago when multiple devices started appearing in home networks, and now people treat it like some kind of firewall (which it usually isn't!) or an absolute necessity because they can't imagine anything else.
Ask any console player about what type of NAT they have (NAT type 0? Type 1? Type 2? open? moderate? strict? I've never been able to figure out what these classifications even mean on a technical level!) and they'll shudder with flashbacks of getting basic connections to work with their crappy ISP router. This should never have been a problem, but everyone kept dragging their feet and eventually we decided to accept this mess.
I think part of the reason is that many schools still only teach IPv4 in their networking classes, so when people encounter IPv6 in the real world they're scared and confused by concepts, protocols and mechanisms they were never prepared or trained for.


VPS providers such as Linode were around at the time, and they weren't that expensive. $20/mo would have got you enough to get started. Or you just get shared PHP hosting which would have been cheaper or even free (with ads injected). And much simpler to deploy than today, just FTP the files and boom it's live. If you were lucky, your host had cPanel.

What? Have you forgotten what your first time was like?
There is a huge difference between thinking:
> "I build my first shitty website, if i leave my PC on everybody in the world can use it. Who knows what will happen?"
And instead being required to go:
> "I'll spend 20$ a month to maybe entertain a couple of people for a couple of minutes by using someone else's hardware"



Then and now, any second-hand hardware will yield more computing power than the cheap VMs you can rent. (of course you can rent a dedicated server with 32 cores and 128GB RAM nowadays; this doesn't change the fact that entry-level offers are very limited on resources compared to what you can easily find AFK)

It was never a good idea to host a public site on one's personal computer with all the sensitive personal data on it where it could be hacked or DDOSed. Even when IPv4 addresses were easy to get it was a very bad idea.
When you factor in buying a separate server, 20$ a month doesn't sound too bad.

Most people getting started are not going to understand systems administration enough to set up a server like that. But they can run software locally.
One of the first projects I was ever a part of was a “WAMP” machine (Windows Apache MySQL & PHP) running Nuke Evolution forum software. I learned a lot from that and would never do it again but it was a useful project for some people and I learned a lot by patching around the source files and learning about MySQL enough to make backups and improvements and so on.
Being able to put up a simple service is only one of the reasons to be publicly addressable; P2P is also important (things like games, VoIP), but $20 for a VPS is no small barrier.
Not for someone getting started.

You could run it in a VM, which is equivalent to what your 20$ host is doing. Or you could run it on a separate machine. Or you can run it on the same machine, which was common back in the day... if you use a reputable distro and apply updates regularly then it's really not a concern (i can't remember myself or anyone i know being hacked through vulnerable packages, except for WordPress, but that's precisely because it's not packaged by Debian).
> 20$ a month doesn't sound too bad
Doesn't it? I guess it's a matter of age and class and nationality. If you're too young to earn money, it's a barrier. If you're in the lower classes of your country, 20$/month can be a lot (that's like food for 30 days for one person). If you're in a "poor" country (i.e. neo-colony depleted of its resources by global north countries), 20$/month can even be considered a decent monthly income.
> buying a separate server
That's the thing. You usually don't have to buy it. It's old hardware lying around or that someone will donate for the purpose of running fun projects.

I was comparing the payment to buying or operating a server (even a free old server has costs, e.g. for electricity). In truth, a proper modern comparison should be to a free plan from one of the cloud providers which is likely to be 0/month.

This was considered a sophisticated operation because in the late 90s and early 00s basic hosting and email were often included with consumer Internet packages.
Server hacking and DDOSing weren't quite the organised thing they are today.



Yes, we do not have enough IP addresses for all IoT devices, for all refrigerators and smart bulbs.

Yes, but some things become tricky:
- SMTP reputation is tied to the reverse DNS of your public IP (see the small lookup sketch after this list)
- reverse-proxying TLS-encrypted traffic relies on SNI headers, which not all protocols implement
- some protocols have no virtualhost (domain) notion at all, like gopher or SSH
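On the first point, here is a minimal sketch of the PTR lookup that receiving mail servers commonly run against the sending IP; a missing or generic reverse record hurts deliverability. (Python standard library; the IP is just an example.)

    import socket

    def reverse_dns(ip):
        try:
            hostname, _aliases, _addresses = socket.gethostbyaddr(ip)
            return hostname
        except socket.herror:
            return None   # no PTR record: many receivers will score this as spammy

    print(reverse_dns("8.8.8.8"))   # e.g. "dns.google"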
Overall, it's not so easy and simple. Sure i don't care that IoT devices don't have public addresses. To be honest, i'm firmly against IoT as a dystopian nightmare (good luck breaking into your home when your "smart lock" fails). However, public addresses and symmetric bandwidth are very important politically speaking, because they ensure that everyone is given equal opportunity to publish information.
Before the Internet, we had mostly asymmetrical communications. Newspapers required considerable resources to set up, and radio stations were (and still are) government-approved because there are limited channels available... so people could only consume information, not spread it. The Internet did away with this scarcity by having all IP addresses created equal, and everybody having as much upload as download speed (before xDSL).
The Internet was the first network where everybody could create content and actually practice what some people call "freedom of speech" for a marginal cost. If you take away public IPs or introduce asymmetric bandwidth, it's not the Internet anymore: you are creating yet another passive consumption network where big corps and nation states tell you what to think.
I personally think asymmetric DSL is the worst thing that happened to the Internet so far. It created the idea that there's different hardware and connectivity for clients (we the people) and for servers (fancy machines in datacenters)... two different classes of devices if you will. Nothing could be further from the truth, but this manipulation by the telco industry gave birth to the centralized hosting hellscape (GAFAM) that we know today.

But also those IPs should be available without an entity having to assign them.
And not only that but we would also need a free DNS system so you can't be denied host resolving.
And domain names shouldn't be controlled by some entity, because you can be denied having one or the issuer can withdraw your domain.
Most censorship I've seen was done by DNS filtering. Also, some domain names were withdrawn from their owners even though they hadn't done anything illegal.

I went to the NANOG meeting in October 1997. Many (most?) of the people who were responsible for administering the core routers using Internet Protocol at the time were there. During one talk they were talking about IPv4 running out, and mitigations for this - NAT, dynamic IPs, reclaiming allocated IP space, web servers that could serve multiple domains from the same IP address etc. One questioner went to the microphone and asked in a serious manner, "can't we solve all of these problems by rolling out IPv6?" The entire room broke into laughter.

This sounds interesting. Can you ELI5?

Big research institutions that were present when IP addresses were being allocated got A LOT of IPs by simply asking for them. Apple has the entire 17.0.0.0/8 range. Ford Motor Company has one, the US Gov has a lot [0]. Up until recently MIT had all of 18.0.0.0/8 (they sold something like half of it to AWS for a hefty sum not too long ago).
As a student (or visitor), when you joined the network (wired or Wi-Fi) you weren’t allocated some internal IP behind a router but a PUBLIC 18.something that was in the global address space because they had so many IPs available. This meant you could literally host something on the public internet from your dorm room because every device on the network was publicly routable by a unique public IP address.
[0] https://en.m.wikipedia.org/wiki/List_of_assigned_/8_IPv4_add... (see the last section on the original allocation)

An interesting detail, which seems alien today, is that this was also true at my various employers throughout the 90s. My desktops at work all had public IP addresses and were directly on the Internet, no firewall or anything.
I ran mail and web servers, fully internet accessible, on my work desktops (and lab machines). It was a natural thing to do.



The design works just like postal addressing. Your postal address contains the directions to your building from any location on earth. Even if you live in a dormitory building with many other residents, I can still send you a letter directly by adding "door number: 42" to your dorm's postal address.
IP addressing uses numbers instead of English terms like "door" and "street". So I can't simply add "door number" to your building's IP address; your building has to be given enough addresses so each resident's computer can have its own. When your computer has a public IP address, I can send Internet packets directly to you.
Harvard was early to the slicing of the IPv4-address pie, so they had enough addresses for each of their residents, including Zuck. Anyone with internet could put Zuck's IPv4 address on an Internet packet and it would end up on his computer. Most of these packets would be HTTP requests to facebook.com, to which his computer would reply with a page from the facebook website.
This is the internet working as intended.
But we ran out of IPv4 addresses in 2012, which has forced internet service providers to adopt an address-sharing scheme called network-address-translation (NAT) that makes it impossible to send letters directly to other people's computers. Imagine I wasn't allowed to put any room number or name on my letters. If I sent a letter to your dormitory, the staff there wouldn't know what to do with the letter and would be forced to return-to-sender or discard it. This is what NAT does, and it has turned the glory of the Internet into a centralized monster of control and censorship.
If you want to host a website with a public IPv4, only established cloud providers that obtained enough IPv4 addresses before it was too late can help you (primarily Amazon, Google and Microsoft).
The successor of IPv4, IPv6, brings enough address space for every person, their dog, their dog's fleas, and their dog's flea's microbes. We can go back to hosting websites from our dormitories, sending chat messages directly to our friends (not via Google, Facebook and Microsoft), and start new ISPs that missed out on the IPv4 pie that actually have a chance at competing with the likes of Comcast.
IPv6 reintroduces the equity to the internet that facebook benefited from at its inception.
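The rough arithmetic behind that claim (Python; the ~8 billion population figure is an assumption):

    ipv4_total = 2**32
    ipv6_total = 2**128
    people = 8_000_000_000
    print(ipv4_total / people)    # ~0.54 IPv4 addresses per person
    print(ipv6_total / people)    # ~4.3e28 IPv6 addresses per person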

the end-to-end principle is mostly undermined by stateful firewalls and a total lack of secure-by-design in software development; this will not change with ipv6

Except for the fact that nobody can type, much less remember, any IPv6 address.

rfc1918 address space is easily remembered because people mostly use 192.168.xx.xx, but ipv6 has the same idea, and when written in shorthand it isn't significantly larger.
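A tiny illustration of that shorthand point, using Python's ipaddress module (the ULA-style prefix is just an example):

    import ipaddress
    print(ipaddress.ip_address("192.168.1.10"))           # 192.168.1.10
    print(ipaddress.ip_address("fd00:0:0:0:0:0:0:10"))    # prints the compressed form: fd00::10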

Obviously doesn’t scale, but I would assume this was normal back when you only interacted with say <10 servers.

You basically just need a router and an OS from the last two decades and your machines to have a defined host name (which your OS installer takes care of).

I'm looking forward to using `router.local` over `192.168.1.254`.

There was a time when the Internet was not divided between producers and consumers of content, but everyone was an equal netizen with publishing capabilities. Then came asymmetric connections, and datacenters, and the modern hellhole we all know too well.
It's never too late to act: many "community networks" are doing an amazing job to promote selfhosting and hosting cooperatives.


e.g. Isolating subnets and restricting outbound access. Seems like a useful defence-in-depth mechanism in case of misconfigured firewall rules.


NAT has become too complex and most consumer versions of it are developed for ease of use over security. Don't trust NAT to protect your network, because the device doing NAT in your home network most likely wasn't developed to use it as a security measure.



No, it's entirely unrelated to NAT. That's a consequence of the firewall on the router.
IPv6 doesn't get rid of the firewall.





Allowing “ownership” of such resources just leads to rent-seeking.


In the case of Delta Airlines and Delta Faucets for example who would get to have "delta.com"? Then what about all the other countries with independent trademark rules and authorities?
I like the idea, but it would end up being way more complex than it seems.




OTOH there would probably just be “not squatting” services just like there were/are “under construction” landing pages.

Incorporating a time dimension to domains, such that it's explicitly recognised, and greatly restricting transfers, would be one element.
Ownership of domains should sit with the registrant, not the registrar.
Characterspace should be explicitly restricted to 7-bit ASCII to avoid homoglyph attacks. It's not a shrine to cultural affirmation, it's a globally-utilised indexing system, and as such is inherently a pidgin.
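A quick illustration of the homoglyph concern, using Python's built-in IDNA codec (the spoofed name is an example only):

    spoof = "аpple.com"                 # first letter is Cyrillic U+0430, not Latin "a"
    print(spoof == "apple.com")         # False
    print(spoof.encode("idna"))         # b'xn--pple-43d.com'
    print("apple.com".encode("idna"))   # b'apple.com'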
Other obvious pain points:
- BGP / routing
- Identity, authentication, integrity, and trust. This includes anonymity and repudiation. Either pole of singular authentication or total anonymity seems quite problematic.
- Security. It shouldn't have been permitted to be bolted on.
- Protocol extensibility. Standards are good, but they can be stifling, and a tool for control. Much of the worst of the current Internet reflects both these problems.
- Lack of true public advocacy. It's deeply ironic that the most extensive and universal communications platform ever devised is controlled by a small handful of interests, none answerable to the public.
- More address space. IPv6 is a problematic answer.
- Better support for small-player participation. Servers are still highly dependent on persistence and major infrastructure. A much more robust peered / mesh / distributed protocol set that could be utilised without deep expertise ... well, it might not make things better, but we'd have a different problem-set than we face presently.
- Explicit public funding model for content, and a ban or very heavy tax on most advertising.
- A much keener, skeptical, and pessimistic early analysis of the interactions of media and society.
Imagine how many times security and privacy have been reimplemented in different contexts.
And that patchwork approach will incentivize security breaches and manipulation through dark surveillance until ... no end in sight.


Although even unauthenticated encryption is nice to prevent passive eavesdropping.

Imagine a world without DDoS or Cloudflare's 3+ second redirect.

What's to stop a malicious actor from just ignoring the refusal?


Elements that know they're user-editable (images with upload controls attached to them to replace the image that's there, dates that trigger calendar controls when clicked, and inline editing for other elements that actually works and is consistent across browsers).
An offline database in the browser that has the concept of user accounts and resetting passwords baked in, as well as subscription payments, and has a universal API that can be automatically deployed to any hosting platform.
All of this would make building web apps and games trivial for so many more people -- write some basic HTML, upload it to one of a million platforms of your choice with a click, and you have a testing ground for a product that can grow.
It would be a way for anyone on the internet to build something useful and get it out there without learning a thousand technologies with a thousand nuances.

DNS is a horrid mess that should have been designed with ease of reading in mind. And I know that DNS was designed way before JSON, but I think a data transit format similar to JSON would have made the system a bit more extendable, and given people a chance to make it a more dynamic system.
E-Mail was brilliant in its time, but for the love of all things explosive and holy is it bad. Just the fact that in its base design there is no E2E encryption going on is a problem.
My biggest beef with the current internet is HTTP. Everything is built on it, and it isn't the greatest system. There have been so many systems and protocols implemented that did so many things, FTP/gopher/irc/etc, and most of them have gone the way of the dodo. A few hold-outs in the dedicated tech world will still use irc, but we could have done so much with cloud-based systems built on FTP. And if we had a new spec for irc, would we need Slack/Discord/MS Teams/etc? They could then all talk to each other. We shouldn't be trying to reinvent the wheel, we should be using these older services in our platforms.
And don't get me thinking about cloud. The worst term that a marketing team ever got hold of. At its core, it's just somebody else's computer. And again, so much of it is built on HTTP protocols. Not many people know or remember that the X Window System for *nix systems had a distributed model built in. Log into a server, set one environment variable to your IP address (as long as you were running X yourself), and you could run programs on the server with the GUI on your computer.

I wish we could make new protocols at all.

The reason we have all those separate systems is not that there are no alternatives: irc could have evolved with a new spec, and there is also XMPP (Jabber)... The reason all those systems like Slack/Discord/MS Teams do not interoperate with each other is that they are developed by companies that need to make money, and they want to force and keep users on their systems.
I think email is the only communication protocol that is still very popular and works across providers. I don't think it will disappear anytime soon. At this point, email providers cannot lock their users into their own system: no one can imagine that you'd be only able to email other gmail accounts from a gmail account or other microsoft accounts from a microsoft account.

this is mostly an artifact of how most firewalls are configured to only allow "necessary" stuff; this also applies to ipv6, and hence all dreams of it re-enabling end-to-end connectivity are kinda moot.


email is a nice way of addressing, but we underuse that address compared to all the balkanized post-email systems.

https://delta.chat/en/ implements encrypted IM over email.
https://jmap.io/spec-calendars.html is the calendaring spec for IMAP's likely successor
https://webfinger.net/ is the (appallingly named) http protocol for finding named people such that social connections can be made and managed. Some extensions exist for services that commonly run with email that provide webfinger services.

Indexing and page interoperability is done by exposing standard functions which yield the necessary metadata. For example, if you want your site to be indexable by a search engine, you expose a function "contentText()" which crawlers will call (and which the client browser might also call and display using the user's own reader app). In the simplest case, the function simply returns a literal.
Core resources like libraries would be uniquely identified, cryptographically signed, versioned, and shared.
If someone wanted to make use of a browser pipeline like the standard one we have today, they might send code which does something like "DOMLibrary.renderHTML(generateDocument())". But someone else could write a competing library for rendering and laying out content, and it might take hold if it's better, and it wouldn't have to be built on top of the first one.
Also, the browser wouldn't necessarily be a separate app (though someone could make one); it would be a standard built-in feature of operating systems, i.e. the sandboxing would be done at the kernel level. With security properly handled, there'd be no difference between a native app and a web app, except whether you load it from your own filesystem or a remote one.
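A minimal sketch of that hypothetical model (Python standing in for whatever language such pages would ship; the names contentText and render are invented for illustration): a page is code exposing standard functions, and a crawler calls only the ones it understands.

    def contentText():
        """What a search crawler would call to index the page."""
        return "Notes on rebuilding the internet, one protocol at a time."

    def render():
        """What a browser, or the user's own reader app, might call instead."""
        return "<article>" + contentText() + "</article>"

    print(contentText())   # the crawler's view of the page
    print(render())        # one possible browser's view of the page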



Same for copying text.
Advantage: you can also select text in images, and block ads that are images.



But the problem is that HTTP is more suited to deliver HTML and browsers were designed primarily to render HTML.

Edit: I think this would result in protocols over walled gardens. The problem is JS makes HTTP/HTML everything to everyone.

The alternative to JS is not "no scripting", it's websites that only function with proprietary plugins installed.

Flash got axed because the way it was implemented into clients was wasteful, insecure and a bad user experience in general. Same with Java; you can't make the same tools that Java applets once provided through the browser because of the browser's security model, so I'd say the two never competed.
Making a quick interactive animation that works across all kinds of resolutions and sizes was trivial in Flash, but in HTML5 this is a challenge. HTML5 and friends fixed the problems that made web developers reach for Flash, like file uploads, animations and predictable interaction with the mouse and keyboard. These features were often already possible in browsers, but Flash was the only tool that made them appear the same in every browser available. They didn't replace the Flash scene for games, interactive simulations and other online experiences. The Flash game scene pretty much died when Flash started getting blocked by Chrome, with only a small subset of the community fragmenting and finding their way to frameworks like Unity Web and its then-plentiful competitors, which had high learning curves because they were designed for "real deal" game developers rather than self-taught animators turned game devs.
I think JS is a necessity for the web to exist today, but we need an alternative for what once was Flash, Java, Shockwave and more. Too many features have been shoved into HTML and Javascript that have no business there, left to be abused by trackers and hackers alike.

Should MS Word and Adobe PDF also execute code? Should I only be able to run an application by executing it in MS Word or Adobe PDF Reader?

Oh boy do I have news for you! https://helpx.adobe.com/acrobat/using/applying-actions-scrip...
Not sure about Word as I barely know it, but I'm sure you can execute some sort of code. If not JS, probably VBA or other Microsoft language.


I'm confused by this comment - HTML5 does do all that out of the box. Are you saying it should have done it from the start?
I don't get the bit about JS being required to render static content either...


Yes and those were websites that for the most part nobody gave a crap about. The important sites weren't willing to dump so many of their viewers. Heck, even post-JS, the web didn't suck so badly until the option of shutting off JS was removed from browser UI's. Before that, enough users shut off JS that sites wanting wide audiences had to be able to function without it.


https://stackoverflow.com/questions/6355300/copy-to-clipboar...




I don't see how it would. I think it would have led to something equivalent to JavaScript, because that's exactly the route we took to get to here. The WWW started as just documents, and there were plenty of protocols for other internet things. Businesses and consumers (and almost everyone else) want more than documents, and avoiding requiring people to install and run a separate client for every purpose (and requiring developers to build said cross-platform clients) led to plug-ins and then capable JavaScript.

Eventually, someone would try to ship everything to everyone through the internet, and they'd figure out a way. It's all just byte streams anyway. Perhaps something like Java applets or Flash, or some worse version of "Click this to install our plugin".


Make it easier and more equitable to obtain addresses and ASNs.
Build a protocol to make edge/cloud computing more fungible. Similar to folding@home but more flexible and taking into account network connectivity rather than just CPU cycles. Probably looks a lot like Cloudflare Workers/Sandstorm but with no vendor lock-in.
DNSSEC only. TLS only for HTTP/consumer facing web.
Actually, on that topic, probably something simpler than TLS to replace TLS. It has too many features. x509 can stay but the protocol needs reworking for simplicity.

For example: a public index service (akin to DNS) where all pages upload all hyperlinks they were using. The end result is a massive graph that you can do PageRank on. You'd have to add some protections to avoid it getting gamed...
Email was the first decentralized social network and with it came bulletin board services and groups. Could these concepts have been developed a bit further or been a bit more user friendly while remaining decentralized?
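A minimal sketch of running PageRank over such a public link index (Python; the three-page graph and the 0.85 damping factor are made up for illustration):

    links = {
        "a.example": ["b.example", "c.example"],
        "b.example": ["c.example"],
        "c.example": ["a.example"],
    }

    def pagerank(graph, damping=0.85, iterations=50):
        pages = list(graph)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outlinks in graph.items():
                share = rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += damping * share
            rank = new
        return rank

    print(pagerank(links))   # c.example, linked to by both others, scores highest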

Build in a layer for onion routing as well, so that all the servers in the middle don't automatically know who you're trying to reach.

* A protocol for running distributed binary apps. Transforming the browser in an operating system sucks for both users and developers.
The situation today of course is that the method that usually comes to mind when a person decides that something he or she has written should be put on the internet publishes what is essentially an executable for a very complex execution environment, and (except for a single non-profit with steadily decreasing mind share) the only parties maintaining versions of this execution environment with non-negligible mind share are corporations with stock-market capitalizations in the trillions.

Typical page size would go from 1M to 64K-128K. Images would stream in after the initial page renders, but since most pages would fit in 1-3 packets, you'd see pages pop in very quickly. This would also be very helpful for poor connections, mobile and the developing world.
I'd fund a team to do this if I could figure out who would buy it.

Take amazon landing as an example:
* HTML/CSS: 80 KB and 88 KB
* Images: 3.2 MB
* Font: 180 KB
* JavaScript: 260 KB
Reducing HTML/CSS to 1 byte each would make nearly no difference, because huge parts of pages are images.
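Quick arithmetic on those numbers (Python):

    sizes_kb = {"html": 80, "css": 88, "images": 3200, "font": 180, "js": 260}
    total = sum(sizes_kb.values())                               # 3808 KB
    print(100 * (sizes_kb["html"] + sizes_kb["css"]) / total)    # ~4.4% of the page
    print(100 * sizes_kb["images"] / total)                      # ~84% is images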

64-128K page size is perfectly doable in HTML today if anybody actually cares enough to do it. How much of the bloat in say an Amazon page is the JavaScript?
Is there something about your protocol that would prevent the big players from re-normalizing 1M page size with it?


I would also describe SVG as a very verbose instruction set rather than a reduced instruction set.

2. SRV RRs instead of "well known ports". Solves load balancing and fault tolerance as well as allowing lots of like-protocol servers to coexist on the same IP address (see the sketch after this list).
3. Pie-in-the-sky: IPv4 semantics w/ a 96-bit address space (maybe 64-bit).
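A small sketch of how SRV-style discovery (point 2) replaces well-known ports: the client asks for, say, "_imap._tcp.example.com SRV" and gets back a prioritized, weighted list of (host, port) targets. The records below are made up, and the selection logic is a simplified version of what RFC 2782 describes. (Python.)

    import random

    srv_records = [  # (priority, weight, port, target)
        (10, 60, 993, "mail1.example.com"),
        (10, 40, 993, "mail2.example.com"),
        (20, 0, 993, "backup.example.com"),   # only used if the priority-10 hosts fail
    ]

    def pick_target(records):
        best = min(r[0] for r in records)
        candidates = [r for r in records if r[0] == best]
        weights = [r[1] for r in candidates]
        _prio, _weight, port, host = random.choices(candidates, weights=weights, k=1)[0]
        return host, port

    print(pick_target(srv_records))   # e.g. ('mail1.example.com', 993)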
I tried to learn it a while ago and got super frustrated with how things are. The whole thing looked upside-down to me.
I mean, DKIM, SPF, DMARC, Bayesian filtering, etc. sound like band-aids upon band-aids to fix something that's really broken inside.
We should have had DHCP prefix delegation for IPv4 so people wouldn't need NAT.



That's not the worst idea I've ever heard.

I'd go source-routed isochronous streams, rather than address-routed asynchronous packets.
I haven't updated my blog in a few years, but I'm still working on building the above when I have the time. (IsoGrid.org)

ISPs should simply check where the UDP traffic is coming from, and filter out packets whose source address doesn't match the customer network they came from.
This would literally make the internet DDoS free.
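What that amounts to is ingress filtering in the spirit of BCP 38; a minimal sketch (Python, with made-up customer prefixes):

    import ipaddress

    customer_prefixes = {
        "port-1": ipaddress.ip_network("203.0.113.0/24"),
        "port-2": ipaddress.ip_network("198.51.100.0/24"),
    }

    def should_forward(ingress_port, src_ip):
        # Drop anything whose source address isn't in the prefix assigned to the
        # customer port it arrived on; spoofed-source floods die at the edge.
        return ipaddress.ip_address(src_ip) in customer_prefixes[ingress_port]

    print(should_forward("port-1", "203.0.113.7"))    # True  - legitimate source
    print(should_forward("port-1", "198.51.100.9"))   # False - spoofed, drop it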

https://en.wikipedia.org/wiki/Low_Orbit_Ion_Cannon
Amplification makes a DDoS bigger, but isn't what makes it a DDoS.

I know, but large botnets like Mirai and similar usually take advantage of how UDP flooding works, because TCP RST packets don't scale as well.
I was just mentioning that because ISPs could easily get rid of most UDP-based DDoS attacks if they watched their networks' packet sources (which they're doing anyway).
Usually it's literally just a comparison against IANA-assigned IP ranges (whether source and victim are inside or outside), and if they mismatch, it's very likely to be a UDP flooding attack.

if the current tech is so messed up as to warrant a rebuild, why would one expect that all the requirements are properly captured?
having said that, stepping back and articulating the existing problems could be a viable first step. Identify and scope the problems before dreaming of solutions. /product engineer and party pooper hat off
But a decentralised way to do an internet search. So an unbiased/no-tracking search engine.
As for security, I'd enhance the web browsers to do internet hygiene - removing unwanted bits and blocking information leaks. The browser should protect user accounts from being linked together.
Another major issue is payments, which today are a high-friction, backwards process and act like a gatekeeper. We need an equitable solution for micro-payments without too-large processing costs.
The last one is search - users can't customize the algorithmic feeds it generates and startups can't deep crawl to support inventing new applications. Search engine operators are guarding their ranking criteria and APIs too jealously while limiting our options.
So in short, ipfs/bittorrent like storage, nanny browser, easy payments and open search.
Still an unresolved issue to this day AFAIK.


I think the real challenge to implementing this kind of thing is preventing abuse.
If servers were to accept subscriptions on resources under their purview in the form of an arbitrary URL to poke when there's an event, it's ripe for abuse without some additional steps. But I think something along the lines of what letsencrypt does would be sufficient; make the subscriber prove control over the domain to be poked by asking it to reply with some gibberish at a specific url under its purview before accepting the subscription at that domain, and you'd throttle/limit those operations, just like letsencrypt does. At least that way you're not turning subscriptions into DDoS attacks via notification events sent to unsuspecting domains...
[0] Tussle in Cyberspace: Defining Tomorrow’s Internet https://groups.csail.mit.edu/ana/Publications/PubPDFs/Tussle...


E.g. for ssh, there's mosh (uses UDP to work around the control part of TCP).
Come to think of it -- everything I use is either stateless (http), or can use ssh for transport (ssh, sftp, sshfs, rsync, ...).
I'm sure some other protocols still break, but I haven't felt that pain in years!
Protocols like DNS and SMTP are designed so that multiple servers can handle traffic, and one going down isn't a big deal - the clients will just retry and the entire system keeps working.
Compare that to HTTP, which doesn't have retry mechanisms, which results in needing significant engineering to deal with single points of failure: fancy load balancers and other similar hard engineering challenges. Synchronization of state across HTTP requests would still be an issue, but that's already a problem in the load-balancer case, usually pushed to a common backing database.
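For instance, a small sketch of the SMTP behaviour described above: the sender walks the MX list in preference order and simply moves on if a host is down (Python; the hosts and the deliver() stub are made up).

    mx_records = [  # (preference, host), as returned by an MX lookup
        (10, "mx1.example.com"),
        (20, "mx2.example.com"),
        (30, "mx-backup.example.com"),
    ]

    def deliver(host):
        """Stand-in for an SMTP delivery attempt; pretend the primary is down."""
        return host != "mx1.example.com"

    def send_mail(records):
        for _pref, host in sorted(records):
            if deliver(host):
                return "delivered via " + host
        return "queued, retry later"   # real SMTP also retries over hours or days

    print(send_mail(mx_records))   # delivered via mx2.example.com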



I don't see how would that preclude strong authentication. We would need TLS all the same.
And making more top level domains available from the outset instead of the fetishisation of the .com domain.
E2EE for all major protocols from the start (DNS, HTTP, SMTP, etc)
Protocols for controlling unwanted email and analytics (a not disastrous version of EU’s cookie consent)
Sounds elevating... P-:
Disclaimer: 'This position is constructed to deal with typical cases based on statistics, which the reader may find mentioned, described or elaborated on a tendency basis.'
Sure, OT - but you may also like thinking about it the other way (-;



It is saner to make a fast network, and then layer an anonymous one on top of it.


You can geolocate any fast internet connection by measuring its latency to various points on Earth.

How would you be able to differentiate 20 degrees east of the server with 20 degrees west of the server?


While the spam problem is much better than it used to be, that's because there's a whole lot of behind-the-scenes (and expensive) infrastructure devoted to combating it.
That infrastructure has also made it considerably more difficult to run your own mail server. While you can still do it, there are many hoops to jump through if you want to keep your mail from being dropped on the floor by the first overzealous spam filter it encounters. Unless you're big enough to devote a staff just to that (or you have a lot of free time) it's easier (but more expensive) to just use MailGun, MailChimp, or one of their brethren.
I would guess that the annual cost of spam is somewhere in the billions.

Thanks to VoIP, we've seen a surge in phone numbers being spoofed with robocalling. I'd like to see some form of authentication applied to this and to email.
Does it really need to take 5 seconds to load a website whose contents total 500kB on a 100Mbit/s connection with a 6-core CPU?
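Back-of-the-envelope (Python): the raw transfer time is a rounding error, so almost all of those 5 seconds is parsing, scripts and round trips.

    page_bits = 500 * 1000 * 8        # 500 kB in bits
    link_bps = 100 * 1_000_000        # 100 Mbit/s
    print(page_bits / link_bps)       # 0.04 s, i.e. ~40 ms on the wire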
I think we could accomplish something similar just by having a global SSO that works for any website, totally decoupled from your actual identity or location. The only information it reveals might be some sort of quantifiable social standing based on how you interact with others on forums.


Not trying to be glib, but it's a political and law enforcement necessity that not every Tom, Dick, Harry, and Jane be entry points to the financial system.
I'm not saying I agree with the justification for it, but as someone who has seen how it works... It is alas, an incompatible outcome.
I would eliminate third party requests for page assets. If you need images, css, js, or such it would come from the same origin as the page that requests it.
1. This would eliminate third party tracking
2. This would eliminate abuse of advertising
3. This would eliminate CDNs and thus force page owners to become directly liable for the bandwidth they waste
4. It would make walled gardens more expensive
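The closest approximation to that first proposal with today's tools is a strict Content-Security-Policy; a minimal sketch using only Python's standard library (the port and policy string are illustrative):

    from http.server import HTTPServer, SimpleHTTPRequestHandler

    class SameOriginHandler(SimpleHTTPRequestHandler):
        def end_headers(self):
            # Browsers will refuse to load images, CSS or JS from any other origin.
            self.send_header("Content-Security-Policy", "default-src 'self'")
            super().end_headers()

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8000), SameOriginHandler).serve_forever()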
I would also ensure that section 230 is more precisely defined such that content portals are differentiated from the technology mechanisms over which such content traverses. The idea here is that Section 230 continues to protect network operators, web servers, and storage providers from lawsuits about content, but not the website operators that publish such content.
1. Sites like Facebook and Twitter would become liable for their users submissions regardless of moderation or not
2. Terms of service agreements would largely become irrelevant and meaningless
3. This would radically increase operational risks for content portals and thus reinforce content self-hosting and thus a more diverse internet
I would ensure that identity certificates were free and common from the start.
1. The web would start with TLS everywhere.
2. This would make available models other than client/server for the secure and private distribution of content
3. This would, in joint consideration of the adoption of IPv6, also eliminate reliance upon cloud providers and web servers to distribute personal or social content

No thanks.
I recall a possibly apocryphal story about TCP - namely that it was originally meant to be encrypted. Supposedly it was the NSA who had gentle words with implementors which led to the spec not including any mechanisms for encrypted connections.
So, encrypted by default TCP, for a start.
DNS should have a much simpler, possibly TOFU model to help with the usability elements. DNSSEC is just a nightmare and confidentiality is nonexistent.
Somewhat controversially, I'd ditch IPv6 in favour of a 64bit IPv4.1 - blasphemy, I know, but the ROI and rate of adoption for IPv6 don't justify its existence, IMO.

If it were possible to have infinite radio frequencies to use, then you could continuously broadcast every possible webpage of every website on a different frequency. To load a page, all you have to do is tune in to that frequency. You would then get an instant, latest version of that site without having to wait. This gets more complicated for signed-in websites, but there is no reason you couldn't implement the same thing, just with more frequencies for more users. This wouldn't work for POST requests, but I think any GET request would work fine.

For one thing, operating systems' tcp/ip stacks. They should have come with TLS as a layer that any app could use just by making the same syscall they use to open a socket. For another, service discovery should not be based on hardcoded numbers, but querying for a given service on a given node and getting routed to the right host and port, so that protocols don't care what ports they use.
Domain registration and validation of records should be a simple public key system where a domain owner has a private key, and can sign sub-keys for any host or subdomain under the domain. When you buy the domain you register your key, and all a registrar does is tell other people what your public key is. To prove you are supposed to control a DNS record, you just sign a message with your key; no more fucking about with what the domain owner's email is, or who controls, at this minute, the IP space that a DNS record is pointing to. This solves a handful of different problems, from registrars to CAs to nameservers and more.
The security of the whole web shouldn't depend on the security of 350+ organizations that can all independently decide to issue a secure cert for your domain. The public key thing above would be a step in the right direction.
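A minimal sketch of that "registrar only publishes your public key" idea: the domain owner signs a record with their own key and anyone can verify it against the registered public key. (Python, assuming the third-party cryptography package; the record format and names are illustrative only.)

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    owner_key = Ed25519PrivateKey.generate()      # kept by the domain owner
    registered_pubkey = owner_key.public_key()    # all the registrar needs to publish

    record = b"www.example.org. 3600 IN A 203.0.113.10"
    signature = owner_key.sign(record)            # proves control of the domain

    try:
        registered_pubkey.verify(signature, record)   # resolvers / CAs check this
        print("record accepted")
    except InvalidSignature:
        print("record rejected")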
BGP is still ridiculous, but I won't pretend to know it well enough to propose solutions.
Give IPv4 two more octets. It's stupid but sometimes stupid works better than smart.
Give the HTTP protocol the ability to have integrity without privacy (basically, signed checksums on plaintext content). This way we can solve most of the deficiencies of HTTPS (no caching, for one) but we don't get MitM attacks for boring content like a JavaScript library CDN.
And I would make it easier to roll out new internet protocols so we don't have to force ourselves to only use the popular ones. No immediate suggestion here, other than (just like port number limitations) it's stupid that we can't roll out replacements for TCP or UDP.
And I would add an extension that encapsulates protocol-specific metadata along each hop of a network. Right now if a network connection has issues, you don't actually know where along the path the issue is, because no intermediate information is recorded at all except the TTL. Record the actions taken at each route and pass it along both ways. Apps can then actively work around various issues, like "the load balancer can't reach the target host" versus "the target host threw an error" versus "the security group of the load balancer didn't allow your connection" versus "level3 is rejecting all traffic to Amazon for some reason". If we just recorded when and where and what happened at each hop, most of these questions would have immediate answers.




Blockchains, in a way, are such clocks, where the ordering of the blocks on the chain provides an implicit counter. But blockchains have too much overhead, so by assuming some trust it would be possible to get rid of the proof-of-work mechanism and just use a signed timestamp from a trusted source.



Certainly, things like free speech, the right to a fair trial, privacy and anonymity also somewhat benefit "bad people". But unless you live a very boring life indeed, from time to time perhaps you will be the one who is going against the grain of society. (Or some small but significant segment thereof). And then you'll be glad that you have a safe harbor to fall back on.

And need some sort of identity management for all the automated processes to assume, which would be a huge nuisance.

What makes you think so?

Next, we use 128 bit IP addresses, 32 bit port and protocol numbers, and absolutely forbid NAT as a way of getting more addresses. (no ip address shortage)
Next, all email has to be cryptographically signed by the sending domain using public key encryption. (No more spam)
Next, selling internet access [EDIT]ONLY is strictly prohibited. Either connections can host servers, or find a different business model. Any node of the internet should be able to run a small server. (No more censorship in the walled gardens) Yes, I know it's stupid to try to host a server on the upload bandwidth I get from a cable modem, but it shouldn't be prohibited. If I want to see my webcams from anywhere, it shouldn't require someone else's server.
DNS should be done using a blockchain, it would need some iteration to get it right.

Not sure if this is attainable; someone has to draw the cables and maintain them. What do you propose, that everyone does this for themselves? I don't think we'll be seeing much fiber cable, nor many internet users, then. And what about trans-oceanic cables? In the end someone's got to pay the bill.
I do agree however that DNS should be free and decentralized. The difficulty here would be to avoid domain hoarders. Maybe through some verification / vouching system? On the other hand that seems a lot like PGP keyservers, which also didn't turn out all too great.
All in all, difficult problems. Maybe the current system isn't all too bad after all?


How does this work? Would every email address need to get a public/private key? From where? Does someone get to control who can use email? How much does it cost? Do the certs expire? How do we manage the system for billions of email addresses?
I ask this with genuine interest - I'm not sure how a cert-required email system would work - or how it would help with spam...

We'd have to agree on standard protocols, but all domains would manage their own keys, nothing would be centralized past the already centralized DNS system.

e.g. No need to prohibit NAT if there are more than enough IPs to go around.
