
Ask HN: If the Internet were redesigned today, what changes would you make?

source link: https://news.ycombinator.com/item?id=29053266
119 points by flerovium | 229 comments

I mean the protocols, networking, connectivity. I don't mean the content of the internet.

Is DNS really a perfect protocol? How can it be improved?

IPv6 dates back to 1997 and it really should have been adopted more urgently. IPv4 isn’t a huge issue but it sucks that so much of the internet is dependent on cloud providers because it’s the simplest way to get a public IP address. The decentralized web didn’t happen, in part, because of this.

Facebook was famously started and hosted in a dorm room. But this was only possible due to the history of Harvard within the advent of the internet and the fact that they had such an excess of addresses that Zuck could bind to a public IP address. We’ll never know what tiny services could have blown up if people didn’t hit this wall.

I started off with computers by hosting garrysmod servers. My brother started off with computers by hosting a website dedicated to the digital tv switchover in Wisconsin (lol). This was only possible because my dad was a software engineer and paid a bit extra to get us 5 dedicated IP addresses. If he didn't understand that, who knows what my brother or I would be doing today.

Anyway, I say IPv6.

> Facebook was famously started and hosted in a dorm room. But this was only possible due to the history of Harvard within the advent of the internet and the fact that they had such an excess of addresses that Zuck could bind to a public IP address.

I'm fairly certain the version of Facebook that was hosted from Zuckerberg's dorm room was just for Harvard students, and wasn't accessible from outside the campus network. Keep in mind that early FB was rolled out to only select universities on a campus-by-campus basis over the course of a year or two; it wasn't like it is today. Part of the whole appeal of FB early on was its exclusivity.

There were and are lots of places with routable IPv4 addresses that still have various kinds of traffic management and firewalling. My uni handed out real IPv4 addresses in the early 2000s (may still today!), but absolutely didn't allow inbound connections from anywhere outside of the campus network, at least not on well-known ports. You could (and lots of people did) run a server, SMB or AppleTalk file share (so much porn...), etc., but it wasn't accessible to the entire Internet. (Hotline and Carracho servers, OTOH...) I would be absolutely astounded if Harvard didn't have some inbound filtering on its network at the time; keep in mind this was 2004: peak Windows XP era... students would have been getting hacked left and right if they hadn't.

There are still some big companies around with very large IPv4 allocations for historic reasons (HP has at least two /8s I believe, its own original one plus one acquired from DEC; IBM has at least one; Apple has one, etc.) and some of them use routable addresses internally. I know IBM did this in its major offices in the mid to late 2000s. But you couldn't just spin up a server at your desk and hit it from home without going through IT and having them put in a firewall rule for you. This was all pretty standard network security stuff at that point.

Nearly all of your text about early Facebook is incorrect.

It was originally hosted on Harvard's servers, and lasted only a few hours before the administration pulled the plug on facemash, which was basically a 'hot or not' clone.

Then they rented a server for $85/mo. and launched thefacebook.com a few weeks later.

Both sites were on the public internet.

https://www.fastcompany.com/59441/facebooks-mark-zuckerberg-...

I mean why not dream bigger. IPv6 is a mess, the absolute definition of second system syndrome.

If I could fix anything, it would be IPv6 itself. The biggest thing preventing its widespread adoption is its complicated nature.

An IPv4 with an extra octet or two would have seen complete adoption years ago.

Yep, IPv4.1 with 128-bit (or whatever size; I'm not sure we really need 64 bits of local address in a subnet, but ok) addresses, and none of the things that didn't work well by the mid-90s: no source routing, no router-based fragmentation (maybe router-based truncation instead, but maybe that's also too much to ask), no checksum (or a reduced checksum that doesn't cover the TTL, since it changes every hop), no optional fields (protocols can still have options), etc.

No new/subsumed functionality. Stick with ARP and DHCP, etc.

A better / more actionable upgrade path and plan to get things working so most devices could go v4.1 only quicker, etc.
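To make the "IPv4 with bigger addresses" idea concrete, here is a rough Python sketch of what such a stripped-down, fixed-size header could look like. The field layout, sizes and the pack_v41_header name are all invented for illustration; this is not any real or proposed wire format.

    import struct

    # Hypothetical "IPv4.1" header: 128-bit addresses, fixed length,
    # no options, no fragmentation fields, no checksum.
    def pack_v41_header(src: bytes, dst: bytes, payload_len: int,
                        next_proto: int, hop_limit: int = 64) -> bytes:
        assert len(src) == 16 and len(dst) == 16   # 128-bit addresses
        # version(1) | hop_limit(1) | next_proto(1) | reserved(1) | length(4)
        fixed = struct.pack("!BBBBI", 0x41, hop_limit, next_proto, 0, payload_len)
        return fixed + src + dst                   # 40 bytes total

    header = pack_v41_header(src=bytes(16), dst=bytes(15) + b"\x01",
                             payload_len=1280, next_proto=6)  # 6 = TCP
    print(len(header))  # 40, incidentally the same size as an IPv6 header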

Once you make the addresses longer you need to replace all the same hardware, and you run into 80% of the same problems as the IPv6 migration.

how is ipv6 more complicated than ipv4? the friction comes from having to manage two different network stacks at once, not the so-called complexity of ipv6.

ipv4 is a mess in many ways. ARP, for instance, is a disaster once you reach a certain scale, and many of the ipv4 header fields are unneeded, leading to inefficiencies. multicast on ipv4 is a mess; rfc1918 is a neat idea, but ipv6 fixes it in a far better way and gets rid of NAT in the process (and no, NAT does not increase security, ipv6 still has firewalls!)

in my opinion, ipv6 is the far simpler protocol; it is the migration from ipv4 to ipv6 which is resulting in all this complexity, but that is not the fault of ipv6.

and before people ask why not just extend the address space... that only solves half of the problems ipv4 has, and it results in the same dual-stack situation we have right now. extending the address space is simply not possible in a backwards-compatible way, and if we have to break compatibility, we might as well fix all the other issues with ipv4.

Most people don't care about ipv4's problems; it works. Does ipv6 solve ipv4's problems, such as the impossibility of extending it in a backwards-compatible way? Say I want a 256-bit address space... can IPv6 be extended in that way?

So glad I'm not the only one that has thought this. Every time an IPv6 discussion comes up I think to myself "why couldn't we just make IPv4 addresses longer".

IPv6 tried to solve too many problems at once. We should've just focused on solving the address exhaustion issue. Instead, we have a very slow roll out of IPv6, and awful stuff like CG-NAT taking permanent hold.

This is a fairly common opinion, FWIW, though sadly not one widespread enough to be zeitgeist. There were some really great articles explaining this that I am not finding right now, as I am not allocating enough time to it, but here is some other commentary from Hacker News saying the same thing.

https://news.ycombinator.com/item?id=17344911

God, IPv6 got the shaft hard. People complain that it's too complex and that it solves too many problems at once, but that's because they've become used to the ancient stack of random protocols that make IPv4 work. There are also a lot of people who have it out for IPv6 because their ISPs aren't handing out static addresses, somehow equating IPv6's problems with the shittiness of their ISP.

It's not widely implemented because hardware support is lagging behind (Ubiquiti being a notorious example in the prosumer space); hardware support isn't being developed because it's not rolled out widely; and software support is lacking because of the lack of rollout, which is then used as an excuse not to roll out IPv6.

I bet that if people learned IPv6 before they learned about IPv4 the conclusion would be that IPv4 is a mess. In my opinion, DHCP is a stupid protocol for assigning addresses that shouldn't have been necessary, but we've managed to staple some kind of management ideals on top of it (as if someone couldn't just set a static IP on their device) and using SLAAC feels like giving up control for some. Imagine trying to convince people that they have to set up a USBCP server on either their computer or their flash drive to make USB work without address conflicts, or to make Bluetooth work, they'll laugh at you and ask why that stuff isn't done automatically by the underlying protocols instead. DHCP is useful for many other settings, but address negotiation should've never been a problem it needed to solve in the first place.
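For readers who have only ever seen DHCP, here is a small sketch of the kind of self-assignment being contrasted with it: the classic SLAAC EUI-64 derivation, where a host builds its own IPv6 address from the router-advertised prefix and its MAC (modern stacks often prefer randomized "privacy" addresses instead). The prefix and MAC below are just example values.

    import ipaddress

    def slaac_eui64(prefix: str, mac: str) -> ipaddress.IPv6Address:
        b = bytearray(int(x, 16) for x in mac.split(":"))
        b[0] ^= 0x02                                      # flip the universal/local bit
        iid = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])   # insert ff:fe in the middle
        net = ipaddress.IPv6Network(prefix)
        return ipaddress.IPv6Address(int(net.network_address) | int.from_bytes(iid, "big"))

    print(slaac_eui64("2001:db8:1::/64", "52:54:00:12:34:56"))
    # -> 2001:db8:1::5054:ff:fe12:3456  (no server had to hand this out)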

We've accepted NAT as a fact of life because of ISPs being stingy to hand out addresses years ago when multiple devices started appearing in home networks and now people treat it like some kind of firewall (which it usually isn't!) or absolute necessity because they can't imagine something else.

Ask any console player about what type of NAT they have (NAT type 0? Type 1? Type 2? open? moderate? strict? I've never been able to figure out what these classifications even mean on a technical level!) and they'll shudder with flashbacks of getting basic connections to work with their crappy ISP router. This should never have been a problem, but everyone kept dragging their feet and eventually we decided to accept this mess.

I think part of the reason is that many schools still only teach IPv4 in their networking classes, so when people encounter IPv6 in the real world they're scared and confused by concepts, protocols and mechanisms they were never prepared or trained for.

> DHCP is a stupid protocol for assigning addresses that shouldn't have been necessary,

> the fact that they had such an excess of addresses that Zuck could bind to a public IP address

VPS providers such as Linode were around at the time, and they weren't that expensive. $20/mo would have got you enough to get started. Or you just get shared PHP hosting which would have been cheaper or even free (with ads injected). And much simpler to deploy than today, just FTP the files and boom it's live. If you were lucky, your host had cPanel.

> $20/mo would have got you enough to get started.

What? Have you forgotten what your first time was like?

There is a huge difference between thinking:

> "I build my first shitty website, if i leave my PC on everybody in the world can use it. Who knows what will happen?"

And instead being required to go:

> "I'll spend 20$ a month to maybe entertain a couple of people for a couple of minutes by using someone else's hardware"

If your PC only consumes 50 W (somewhat low for a desktop) and your power is 20c a kWh (a little high on average, but not crazy high, though perhaps 0 in a dorm room), you'll be spending $7-$8 a month just on electricity. So it's not "free" to run it on your PC. Now, if you view it as your PC running 24/7 anyway, perhaps (much like the dorm room case) the marginal cost is 0. I'm simply making the case that it might not be, and it could very well be a significant percentage of the Linode cost.

But it feels basically free, which is the important thing vs. having to set up billing. $20 a month is a lot if you're a teenager with limited or no income.

It's not entirely free, but the cost is shared with other usage of your computer, and you have many more resources at your disposal.

Then and now, any second-hand hardware will yield more computing power than the cheap VMs you can rent. (Of course you can rent a dedicated server with 32 cores and 128 GB of RAM nowadays; that doesn't change the fact that entry-level offers are very limited in resources compared to what you can easily find AFK.)

There's a huge difference alright.

It was never a good idea to host a public site on one's personal computer with all the sensitive personal data on it where it could be hacked or DDOSed. Even when IPv4 addresses were easy to get it was a very bad idea.

When you factor in buying a separate server, 20$ a month doesn't sound too bad.

I’d argue that it’s a huge barrier to innovation.

Most people getting started are not going to understand systems administration enough to set up a server like that. But they can run software locally.

One of the first projects I was ever a part of was a “WAMP” machine (Windows Apache MySQL & PHP) running Nuke Evolution forum software. I learned a lot from that and would never do it again but it was a useful project for some people and I learned a lot by patching around the source files and learning about MySQL enough to make backups and improvements and so on.

Being able to put up a simple service is only one of the reasons to be publicly addressable; P2P is also important (things like games, VoIP), but $20 for a VPS is no small barrier.

Not for someone getting started.

> It was never a good idea to host a public site on one's personal computer

You could run it in a VM, which is equivalent to what your $20 host is doing. Or you could run it on a separate machine. Or you can run it on the same machine, which was common back in the day... if you use a reputable distro and apply updates regularly then it's really not a concern (I can't remember myself or anyone I know being hacked through vulnerable packages, except for WordPress, but that's precisely because it's not packaged by Debian).

> 20$ a month doesn't sound too bad

Doesn't it? I guess it's a matter of age and class and nationality. If you're too young to earn money, it's a barrier. If you're in the lower classes of your country, 20$/month can be a lot (that's like food for 30 days for one person). If you're in a "poor" country (i.e. neo-colony depleted of its resources by global north countries), 20$/month can even be considered a decent monthly income.

> buying a separate server

That's the thing. You usually don't have to buy it. It's old hardware lying around or that someone will donate for the purpose of running fun projects.

>Doesn't it? I guess it's a matter of age and class and nationality. If you're too young to earn money, it's a barrier. If you're in the lower classes of your country, 20$/month can be a lot

I was comparing the payment to buying or operating a server (even a free old server has costs, e.g. for electricity). In truth, a proper modern comparison should be to a free plan from one of the cloud providers which is likely to be 0/month.

I have a friend who was running a successful dating site over ADSL from a spare PC in a spare bedroom in London. This was in the early 2000s. She sold it a few years later for £££££££s.

This was considered a sophisticated operation because in the late 90s and early 00s basic hosting and email were often included with consumer Internet packages.

Server hacking and DDOSing weren't quite the organised thing they are today.

When I was a teen I could definitely not afford $20 a month on something like that. I could barely afford my MMO subscription!

ipv6 easily could have been the extended ipv4, but it drank second-system syndrome in full and now pays the price.

You can host more than one domain per ip. And most hosting providers, not just cloud providers, also offer packages with a static IP address.

Yes, we do not have enough IP addresses for all IoT devices, for all refrigerators and smart bulbs.

> You can host more than one domain per ip.

Yes, but some things become tricky:

- SMTP reputation is related with reverse DNS of your public IP

- reverse-proxying TLS-encrypted traffic relies on SNI headers, which not all protocols implement

- some protocols entirely don't have a virtualhost (domain) notion, like gopher or SSH

Overall, it's not so easy and simple. Sure i don't care that IoT devices don't have public addresses. To be honest, i'm firmly against IoT as a dystopian nightmare (good luck breaking into your home when your "smart lock" fails). However, public addresses and symmetric bandwidth are very important politically speaking, because they ensure that everyone is given equal opportunity to publish information.

Before the Internet, we had mostly asymmetrical communications. Newspapers required considerable resources to set up, and radio stations were (and still are) government-approved because there are limited channels available... so people could only consume information, not spread it. The Internet did away with this scarcity by having all IP addresses created equal, and everybody having as much upload as download speed (before xDSL).

Internet was the first network where everybody could create content and actually practice what some people call "freedom of speech" for a marginal cost. If you take away public IPs or introduce asymmetric bandwidth, it's not the Internet anymore: you are creating yet another passive consumption network where big corps and nation states tell you what to think.

I personally think asymmetric DSL is the worst that happened to the Internet so far. It's created the idea that there's different hardware and connectivity for clients (we the people), and for servers (fancy machines in datacenters)... two different classes of devices if you will. Nothing could be further from the truth, but this manipulation by the telco industry gave birth to the centralized hosting hellscape (GAFAM) that we know today.

You are right that anyone having their own IP address would mean less censorship.

But also, those IPs should be available without an entity having to assign them.

And not only that, but we would also need a free DNS system so you can't be denied hostname resolution.

And domain names shouldn't be controlled by some entity, because you can be denied one or the issuer can withdraw your domain.

Most censorship I've seen was done by DNS filtering. Also, some domain names were withdrawn from their owners even when they didn't do anything illegal.

> IPv6 dates back to 1997 and it really should have been adopted more urgently.

I went to the NANOG meeting in October 1997. Many (most?) of the people who were responsible for administering the core routers using Internet Protocol at the time were there. During one talk they were talking about IPv4 running out, and mitigations for this - NAT, dynamic IPs, reclaiming allocated IP space, web servers that could serve multiple domains from the same IP address etc. One questioner went to the microphone and asked in a serious manner, "can't we solve all of these problems by rolling out IPv6?" The entire room broke into laughter.

“ But this was only possible due to the history of Harvard within the advent of the internet and the fact that they had such an excess of addresses that Zuck could bind to a public IP address.”

This sounds interesting. Can you ELI5?

I can speak using MIT as an example and I assume Harvard is the same way for the same reasons.

Big research institutions that were present when IP addresses were being allocated got A LOT of IPs by simply asking for them. Apple has the entire 17.0.0.0/8 range. Ford Motor Company has one, the US Gov has a lot [0]. Up until recently MIT had all of 18.0.0.0/8 (they sold something like half of it to AWS for a hefty sum not too long ago).

As a student (or visitor), when you joined the network (wired or Wi-Fi) you weren’t allocated some internal IP behind a router but a PUBLIC 18.something that was in the global address space because they had so many IPs available. This meant you could literally host something on the public internet from your dorm room because every device on the network was publicly routable by a unique public IP address.

[0] https://en.m.wikipedia.org/wiki/List_of_assigned_/8_IPv4_add... (see the last section on the original allocation)

> As a student (or visitor), when you joined the network (wired or Wi-Fi) you weren’t allocated some internal IP behind a router but a PUBLIC

As an interesting detail, which seems alien today, is that this was also true at my various employers throughout the 90s. My desktops at work all had public IP addresses and were directly on the Internet, no firewall or anything.

I ran mail and web servers, fully internet accessible, on my work desktops (and lab machines). It was a natural thing to do.

The router on the OP's network was probably just being a router. No fancy NAT junk, and probably no ACLs / firewalling. It was pretty common to have something like a T1 circuit, a CSU/DSU that connected to the T1 and presented a serial connection, and a PPP or SDLC connection to your upstream ISP over that serial connection. The router's Ethernet interface is connected to your switch (or hub) and all the hosts have IP addresses in the subnet your ISP assigned. Fancier shops might have a proxy server or dedicated firewall box between the LAN and the router.

USC would disable any residential port trying to host a real server like that (i.e. not a game server or something). It's a research and education network, not your free ISP. If you had legitimate reasons, get a teacher's note and we'd let you. We watched the connection counts; we'd investigate the weird and probably disable your port and account and send you to Student Conduct. You had to fly under the radar: too many connections to other machines on the inside (you're up to something), or too much traffic (you're up to something else). Then again, we were better at networking than most other universities.

This is an example of how the internet was originally intended to work: every user of the internet has a public address that any other user can send messages to and receive messages from.

The design works just like postal addressing. Your postal address contains the directions to your building from any location on earth. Even if you live in a dormitory building with many other residents, I can still send you a letter directly by adding "door number: 42" to your dorm's postal address.

IP addressing uses numbers instead of English terms like "door" and "street". So I can't simply add a "door number" to your building's IP address; your building has to be given enough addresses so each resident's computer can have its own. When your computer has a public IP address, I can send Internet packets directly to you.

Harvard was early to the slicing of the IPv4-address pie, so they had enough addresses for each of their residents, including Zuck. Anyone with internet could put Zuck's IPv4 address on an Internet packet and it would end up on his computer. Most of these packets would be HTTP requests to facebook.com, to which his computer would reply with a page from the facebook website.

This is the internet working as intended.

But we ran out of IPv4 addresses in 2012, which has forced internet service providers to adopt an address-sharing scheme called network-address-translation (NAT) that makes it impossible to send letters directly to other people's computers. Imagine I wasn't allowed to put any room number or name on my letters. If I sent a letter to your dormitory, the staff there wouldn't know what to do with the letter and would be forced to return-to-sender or discard it. This is what NAT does, and it has turned the glory of the Internet into a centralized monster of control and censorship.
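A toy Python picture of the dormitory analogy above: with NAT, outbound connections create temporary mappings on the router, but an unsolicited inbound packet matches nothing, so there is nowhere to deliver it. The addresses and ports here are made up for illustration.

    # Simplified NAT translation table on a home/ISP router (illustrative values).
    nat_table = {
        # public port on the router -> (private host, private port)
        40001: ("192.168.1.23", 51512),   # created when the laptop opened a connection
        40002: ("192.168.1.42", 60333),   # created when the phone opened a connection
    }

    def deliver_inbound(dst_port: int):
        if dst_port in nat_table:
            return nat_table[dst_port]    # forward to the mapped private host
        return None                       # no mapping: drop it (why hosting from home breaks)

    print(deliver_inbound(40001))  # ('192.168.1.23', 51512)
    print(deliver_inbound(8080))   # None -- nobody inside "asked" for this packet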

If you want to host a website with a public IPv4, only established cloud providers that obtained enough IPv4 addresses before it was too late can help you (primarily Amazon, Google and Microsoft).

The successor of IPv4, IPv6, brings enough address space for every person, their dog, their dog's fleas, and their dog's flea's microbes. We can go back to hosting websites from our dormitories, sending chat messages directly to our friends (not via Google, Facebook and Microsoft), and start new ISPs that missed out on the IPv4 pie that actually have a chance at competing with the likes of Comcast.

IPv6 reintroduces equity to the internet that facebook benefited from in its inception.

NAT was a thing well before IP addresses became scarce; it is a key enabler of the internet's ease of use, as well as of the principal ability to connect nearly double-digit billions of devices with about 200 million live addresses.

the end-to-end principle is mostly undermined by stateful firewalls and a total lack of secure-by-design in software development; this will not change with ipv6.

> IPv6 reintroduces equity to the internet that facebook benefited from in its inception

Except for the fact that nobody can type, much less remember, any IPv6 address.

and how many people remember public ipv4 addresses, besides a couple of easy-to-remember ones like 1.1.1.1, for instance?

rfc1918 address space is easily remembered because people mostly use 192.168.xx.xx. but ipv6 has the same idea, and when written in shorthand it isn't significantly larger.
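For what it's worth, the shorthand being referred to is easy to see with Python's standard ipaddress module, which will print both the full and the compressed form of the same address:

    import ipaddress

    addr = ipaddress.ip_address("2001:0db8:0000:0000:0000:0000:0000:0001")
    print(addr.exploded)    # 2001:0db8:0000:0000:0000:0000:0000:0001
    print(addr.compressed)  # 2001:db8::1  (what you'd actually type)
    # fd00::/8 unique local addresses are the rough v6 analogue of rfc1918 space:
    print(ipaddress.ip_address("fd00::1").compressed)  # fd00::1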

When I worked at a company with about 5-6 servers and a couple fixed remote workstations, all the programmers knew all the IP addresses by heart, if there were names for anything but the www host I didn’t know them.

Obviously doesn’t scale, but I would assume this was normal back when you only interacted with say <10 servers.

That's a non-issue nowadays. Basically any cheap router supports Avahi/Zeroconf/Bonjour and allows you to reach any other machine on the network directly by its host name instead of its IP. There is no reason to learn the IP address of your first MySQL server when you can reach it through "mysql-1" or "mysql-1.local".

You basically just need a router and an OS from the last two decades and your machines to have a defined host name (which your OS installer takes care of).

This is true, which is why I expect mDNS and DNS to become standard even for local addresses.

I'm looking forward to using `router.local` over `192.168.1.254`.

> Can you ELI5?

There was a time when the Internet was not divided between producers and consumers of content; everyone was an equal netizen with publishing capabilities. Then came asymmetric connections, and datacenters, and the modern hellhole we all know too well.

It's never too late to act: many "community networks" are doing an amazing job to promote selfhosting and hosting cooperatives.

That campus had loads of public IPs that students could run services like thefacebook.com from. Public IPs to boxen in your dorm-room.

What about the security aspects of NAT?

e.g. Isolating subnets and restricting outbound access. Seems like a useful defence-in-depth mechanism in case of misconfigured firewall rules.

NAT without port forwarding, done by the router facing the ISP (the default kind of NAT, which the subscriber does not have to worry about configuring), does have a real security benefit. Connect your home network devices to a router doing that kind of NAT, and this automatically protects all devices in your local network from incoming connections.

It does protect all devices in your home network, until someone uses one of the various NAT slipstreaming attacks from a malicious ad and opens up the web interface to your IP cameras without you noticing.

NAT has become too complex and most consumer versions of it are developed for ease of use over security. Don't trust NAT to protect your network, because the device doing NAT in your home network most likely wasn't developed to use it as a security measure.

Isn’t that attack still possible on IPv6? It’s an attack on the stateful firewall connection tracking, not NAT.

Yes, but that's an unintended consequence and cannot be relied upon. Any default-configuration firewall would achieve the same, but in a more reliable manner.

> that's an unintended consequence

No, it's entirely unrelated to NAT. That's a consequence of the firewall on the router.

IPv6 doesn't get rid of the firewall.

That was precisely my point. Thanks for clarifying :)

It all makes sense until the "IPv6" part. I want a dedicated IPv4 address. Maybe the solution is not technical, but political. Even if the solution is technical rather than political, maybe IPv6 is not the best choice.

I have a static IPv4 address. It costs $10 per month.

IPv6 is a monstrosity designed by numerous committees. I'd take IPv4 and add a few more octets for addresses.

Domain parking/squatting is disallowed. You'd have to make the rules in some imperfect way, but it'd be better than what we have today.

Domain squatting is like a disease, but how would you regulate it? Where is the line between just a parked domain that a lot of tech (and non-tech) people reserve for their future projects, and a domain acquired for re-sale? Given a good offer you would probably sell one of the domains you own just as well, wouldn't you?

A fairly simple approach for all is to up the price based on contention. Regularly auction out the lease, for example.

Allowing “ownership” of such resources just leads to rent-seeking.

How are trademarks registered? Why not use the same procedure for domain names?

Except trademarks are only unique within a specific industry in a specific country.

In the case of Delta Airlines and Delta Faucets for example who would get to have "delta.com"? Then what about all the other countries with independent trademark rules and authorities?

I like the idea, but it would end up being way more complex than it seems.

We could have delta.airlines and delta.faucets; and apple.computer and apple.groceries, for example.

Because this puts an individual with limited resources at a great disadvantage to corporations.

I have several domain names that I am not using, but you're right.

I have more than 10 but I also agree. I'm not squatting, I have a plan for each one, and the threat of losing them would be a great motivator to actually build all these little projects!

OTOH there would probably just be “not squatting” services just like there were/are “under construction” landing pages.

The threat of losing my domains would cause me not to register them in the first place, which would hamper little people like me with small ideas, but probably not actually hurt squatters much since they'd be actively protecting their investments.
DNS itself is certainly problematic.

Incorporating a time dimension to domains, such that it's explicitly recognised, and greatly restricting transfers, would be one element.

Ownership of domains should sit with the registrant, not the registrar.

Characterspace should be explicitly restricted to 7-bit ASCII to avoid homoglyph attacks. It's not a shrine to cultural affirmation, it's a globally-utilised indexing system, and as such is inherently a pidgin.

Other obvious pain points:

- BGP / routing

- Identity, authentication, integrity, and trust. This includes anonymity and repudiation. Either pole of singular authentication or total anonymity seems quite problematic.

- Security. It shouldn't have had to be bolted on after the fact.

- Protocol extensibility. Standards are good, but they can be stifling, and a tool for control. Much of the worst of the current Internet reflects both these problems.

- Lack of true public advocacy. It's deeply ironic that the most extensive and universal communications platform ever devised is controlled by a small handful of interests, none answerable to the public.

- More address space. IPv6 is a problematic answer.

- Better support for small-player participation. Servers are still highly dependent on persistence and major infrastructure. A much more robust peered / mesh / distributed protocol set that could be utilised without deep expertise ... well, it might not make things better, but we'd have a different problem-set than we face presently.

- Explicit public funding model for content, and a ban or very heavy tax on most advertising.

- A much keener, skeptical, and pessimistic early analysis of the interactions of media and society.

End to end encryption and tracking-resistance at a low enough protocol level that most developers or users would never know the pain of even thinking about either.

Imagine how many times security and privacy have been reimplemented in different contexts.

And that patchwork approach will incentivize security breaches and manipulation through dark surveillance until ... no end in sight.

Though the lower you go, the harder it is to upgrade. SSL/TLS has been evolving regularly. In fact, if you connect an old machine to the internet, it is the one thing that is likely to make it incompatible with the current internet.

Distributing keys gets harder the lower you go.

Although even non-authenticated encryption is nice to prevent passive eavesdropping.

Technically tls is end to end encryption. Between you and the service provider.
I think the most crucial functionality missing from a security standpoint is the ability for one IP address owner to tell another that they're refusing service, with the owner of the refused IP being required to filter that traffic at THEIR routing layer. This would effectively eliminate nearly every type of DDoS, and it would also shift the responsibility to handle the attack away from the target's provider and place it squarely on the providers of the compromised systems -- which is how it should be.

Imagine a world without DDoS or Cloudflare's 3+ second redirect.

>the owner of the refused IP being required to filter that traffic at THEIR routing layer

What's to stop a malicious actor from just ignoring the refusal?

Namely that an IP "owner" in this scenario could and should also be the participating upstream provider(s) between the requesting IP and the malicious actor. If a particular provider isn't participating in the system, the traffic can still be blocked by one that is -- if not by blocking the entire range.

And that could also be used for rate-limiting and integrated into apps, so there'd be no need to implement it manually.

Web app capabilities built in to the foundation of HTML and the browser.

Elements that know they're user-editable (images with upload controls attached to them to replace the image that's there, dates that trigger calendar controls when clicked, and inline editing for other elements that actually works and is consistent across browsers).

An offline database in the browser that has the concept of user accounts and resetting passwords baked in, as well as subscription payments, and has a universal API that can be automatically deployed to any hosting platform.

All of this would make building web apps and games trivial for so many more people -- write some basic HTML, upload it to one of a million platforms of your choice with a click, and you have a testing ground for a product that can grow.

It would be a way for anyone on the internet to build something useful and get it out there without learning a thousand technologies with a thousand nuances.

You are talking about the web, not the internet.
IPv6 gets mentioned plenty, and I will take the side that it should have been rolled out WAY sooner than it was, and it should have been rolled out in a way that made it easier to do what I call the Apple Method: just get it out there and the people will adapt to it.

DNS is a horrid mess that should have been designed with ease of reading in mind. And I know that DNS was designed way before JSON, but I think a data transit format similar to JSON would have made the system a bit more extendable, and given people a chance to make it a more dynamic system.

E-Mail was brilliant in its time, but for the love of all things explosive and holy is it bad. Just the fact that in its base design there is no E2E encryption going on is a problem.

My biggest beef with the current internet is HTTP. Everything is built on it, and it isn't the greatest system. There have been so many systems and protocols implemented that did so many things, FTP/gopher/irc/etc, and most of them have gone the way of the dodo. A few hold-outs in the dedicated tech world will still use irc, but we could have done so much with cloud-based systems with FTP. And if we had a new spec for irc, would we need Slack/Discord/MS Teams/etc? They could then all talk to each other. We shouldn't be trying to reinvent the wheel, we should be using these older services in our platforms.

And don't get me thinking about cloud. The worst term that a marketing team ever got hold of. At its core, it's just somebody else's computer. And again, so much of it is built on HTTP protocols. Not many people know or remember that the X Window System for *nix systems had a distributed setup built in. Log into a server, set one environment variable to your IP address (as long as you were running X yourself), and you could run programs on the server with the GUI on your computer.

This touches on what I think is really wrong with internet innovation. We haven't adopted a new widespread protocol since the early 2000s because all big tech wants to silo their user base or make a protocol that gives them a massive first-mover advantage (AMP, RCS).

I wish we could make new protocols at all.

> And if we had a new spec for irc, would we need Slack/Discord/MS Teams/etc? They could then all talk to each other. We shouldn't be trying to reinvent the wheel, we should be using these older services in our platforms.

The reason we have all those separate systems is not that there are no alternatives: irc could have evolved with a new spec, and there is also XMPP (Jabber)... The reason all those systems like Slack/Discord/MS Teams do not interoperate with each other is that they are developed by companies that need to make money, and they want to force and keep users on their systems.

I think email is the only communication protocol that is still very popular and works across providers. I don't think it will disappear anytime soon. At this point, email providers cannot lock their users into their own system: no one can imagine that you'd be only able to email other gmail accounts from a gmail account or other microsoft accounts from a microsoft account.

> so much of it is built on HTTP protocols

this is mostly an artifact of how most firewalls are configured to only allow "necessary" stuff; this also applies to ipv6, and hence all dreams of it re-enabling end-to-end connectivity are kinda moot.

If you read the early RFCs (say, RFC-1000 or earlier) you'll find that FTP was the go-to protocol of the day, much like HTTP has become today.
Can we fix email so that it provides authentication, integrity, and confidentiality protections by default? And also while we're at it make it support binary attachments so that we're not stuck wasting bandwidth and disk doing base64 for everything?
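For a rough sense of the cost being complained about: MIME base64 expands every 3 payload bytes into 4 ASCII characters (plus line breaks in real messages), i.e. roughly a third more bytes for every attachment.

    import base64

    attachment = bytes(3_000_000)            # a 3 MB binary blob of zeros, as a stand-in
    encoded = base64.b64encode(attachment)
    print(len(encoded) / len(attachment))    # 1.333... -> ~33% overhead on the wire and on disk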
why not just extend the email protocol to really support open calendars, open social networks (decentralized) and open p2p chat?

email is a nice way of addressing, but we underuse that address compared to all the balkanized post-email systems.

There are efforts in those directions:

https://delta.chat/en/ implements encrypted IM over email.

https://jmap.io/spec-calendars.html is the calendaring spec for IMAP's likely successor

https://webfinger.net/ is the (appallingly named) http protocol for finding named people such that social connections can be made and managed. Some extensions exist for services that commonly run with email that provide webfinger services.

Delta Chat looks very interesting, thanks for mentioning it. What are the possible downsides? Looks like a winning tech to my non-tech eyes.
The "browser" is a blank execution sandbox with a rendering context. The remote server sends programs (using something like WASM) with a standardized ABI. The program can use the rendering context to put stuff on the screen or receive user input.

Indexing and page interoperability is done by exposing standard functions which yield the necessary metadata. For example, if you want your site to be indexable by a search engine, you expose a function "contentText()" which crawlers will call (and which the client browser might also call and display using the user's own reader app). In the simplest case, the function simply returns a literal.

Core resources like libraries would be uniquely identified, cryptographically signed, versioned, and shared.

If someone wanted to make use of a browser pipeline like the standard one we have today, they might send code which does something like "DOMLibrary.renderHTML(generateDocument())". But someone else could write a competing library for rendering and laying out content, and it might take hold if it's better, and it wouldn't have to be built on top of the first one.

Also, the browser wouldn't necessarily be a separate app (though someone could make one); it would be a standard built-in feature of operating systems, i.e. the sandboxing would be done at the kernel level. With security properly handled, there'd be no difference between a native app and a web app, except whether you load it from your own filesystem or a remote one.

A nightmare for screen readers and other accessibility tools, and anyone who needs to rely on them.

Unblockable ads and popups, uncopyable text, yay!

You can still block ads by recognizing them at the pixel level.

Same for copying text.

Advantage: you can also select text in images, and block ads that are images.

Java might have been that, if HTML and all the other web tech had never happened and Mosaic, Netscape Navigator, etc. had started out as Java sandboxes.

Given that Java obviously did not outcompete HTML, one should reflect on the ways in which HTML is obviously better, in the things that matter, than a code sandbox.

Java, Silverlight, Flash, WebAssembly.

But the problem is that HTTP is more suited to deliver HTML and browsers were designed primarily to render HTML.

Maybe this could be approached by compiling the servo browser engine to wasm...
Security by design. Old internet protocols were built with the mindset of "we're all friends here, we won't make each other's lives hard, right?" Well, that didn't exactly work out. That's why we have spam, DNS hijacking, DDoS, botnets, etc. If some prophet could have convinced the people who built the early internet that security is as much of a concern as, say, availability and fault-tolerance, I am sure we would have far fewer of these problems.
Get rid of most of JS to prevent 'appification' and keep the focus document-centered.

Edit: I think this would result in protocols over walled gardens. The problem is JS makes HTTP/HTML everything to everyone.

Be careful what you wish for. JS and HTML5 are why we don't have Adobe Flash anymore. There was a whole period in the noughties when web interactivity sucked, and so external plugins like Flash, Silverlight and Java applets took up the slack.

The alternative to JS is not "no scripting", it's websites that only function with proprietary plugins installed.

In my opinion, HTML5 and Flash have always served different audiences.

Flash got axed because the way it was implemented into clients was wasteful, insecure and a bad user experience in general. Same with Java; you can't make the same tools that Java applets once provided through the browser because of the browser's security model, so I'd say the two never competed.

Making a quick interactive animation that works across all kinds of resolutions and sizes was trivial in Flash, but in HTML5 this is a challenge. HTML5 and friends fixed the problems that made web developers grasp at Flash for, like file uploads, animations and predictable interaction with the mouse and keyboard. These features were often already possible in browsers, but Flash was the only tool that made them appear the same in every browser available. They didn't replace the flash scene for games, interactive simulations and other online experiences. The Flash game scene pretty much died when Flash started getting blocked by Chrome, with only a small subset of the community fragmenting and finding their way to frameworks like Unity Web and its then plentiful competitors with high learning curves because they were designed for the "real deal" game developers rather than self-taught animators that turned game dev.

I think JS is a necessity for the web to exist today, but we need an alternative for what once was Flash, Java, Shockwave and more. Too many features have been shoved into HTML and Javascript that have no business there, left to be abused by trackers and hackers alike.

It's not about proprietary vs open, it's about not mixing document rendering with code execution.

Should MS Word and Adobe PDF also execute code? Should I be able to run an application only by executing it in MS Word or Adobe PDF Reader?

> Should MS Word and Adobe PDF also execute code?

Oh boy do I have news for you! https://helpx.adobe.com/acrobat/using/applying-actions-scrip...

Not sure about Word as I barely know it, but I'm sure you can execute some sort of code. If not JS, probably VBA or other Microsoft language.

There is a lot of stuff that html should do straight out of the box and that should never have required javascript, like form validation, autocomplete, date pickers, video streaming, etc. Javascript would then have been limited to actual SPAs instead of being required today even to render static content.

> form validation, autocomplete, date pickers, video streaming

I'm confused by this comment - HTML5 does do all that out of the box. Are you saying it should have done it from the start?

I don't get the bit about JS being required to render static content either...

Yes, people started using javascript for basic things that should always have been part of the html syntax (when the web became dynamic, which was the late 90s), to make up for what wasn't in html.

> The alternative to JS is not "no scripting", it's websites that only function with proprietary plugins installed.

Yes and those were websites that for the most part nobody gave a crap about. The important sites weren't willing to dump so many of their viewers. Heck, even post-JS, the web didn't suck so badly until the option of shutting off JS was removed from browser UI's. Before that, enough users shut off JS that sites wanting wide audiences had to be able to function without it.

I think you're underestimating the number of sites that had Flash.

Yes, even getting copy/paste functionality, to allow a user to click a button to copy a piece of text, needed a bit of Flash.

https://stackoverflow.com/questions/6355300/copy-to-clipboar...

No one cared about YouTube? No one cared about Netflix?

People were willing to set exceptions for sites they thought were important. Not important AND requires JS => bye.

I'm not sure the likes of YouTube and Netflix need further privileges compared to the rest of the web.

> I think this would result in protocols over walled gardens.

I don't see how it would. I think it would have led to something equivalent to JavaScript, because that's exactly the route we took to get to here. The WWW started as just documents, and there were plenty of protocols for other internet things. Businesses and consumers (and almost everyone else) want more than documents, and avoiding requiring people to install and run a separate client for every purpose (and requiring developers to build said cross-platform clients) led to plug-ins and then capable JavaScript.

I feel that it will just result in more walled gardens.

Eventually, someone would try to ship everything to everyone through the internet, and they'll figure out a way. It's all just byte streams anyway. Perhaps something like java applet or Flash, or some worse version of "Click this to install our plugin".

Encourage (coerce) writers of blog-like content to use a simplified format closer to RSS-embedded articles or Gemini, making reader mode Just Work instead of relying on hacks to identify the portion of a deeply nested table/div/CSS soup that corresponds to an article. And replace CSS's hierarchical ruleset with a declarative format with a finite number of "axes of customization" (page background/foreground, font, size, spacing, header font and size, code bg/fg color, font, and size) which can be individually overridden or turned off altogether to create a "reader mode" experience.

That seems to be Web-specific, not general to the Internet as a whole.
Would redesign BGP with more resistance to Byzantine actors. Not so much because many folks announce routes maliciously (though that definitely happens), but because people always manage to screw it up accidentally.

Make it easier and more equitable to obtain addresses and ASNs.

Build a protocol to make edge/cloud computing more fungible. Similar to folding@home but more flexible, taking into account network connectivity rather than just CPU cycles. Probably looks a lot like Cloudflare Workers/Sandstorm but with no vendor lock-in.

DNSSEC only. TLS only for HTTP/consumer facing web.

Actually, on that topic: probably something simpler than TLS to replace TLS. It has too many features. x509 can stay, but the protocol needs reworking for simplicity.

TLS 1.3 is kind of complex, but with less optional stuff it's not too bad. X.509 is a nightmare of complexity, but at the same time is inflexible in useful ways. Certificate validation when not every node has the same list of CAs and some CAs are expiring is pretty hard to bulletproof.
Just a thought experiment: but could we have designed better protocols to not need companies like Google (search) and Facebook (social network)...

For example: a public index service (akin to DNS) where all pages upload all hyperlinks they were using. The end result is a massive graph that you can do PageRank on. You'd have to add some protections to avoid it getting gamed...

Email was the first decentralized social network and with it came bulletin board services and groups. Could these concepts have been developed a bit further or been a bit more user friendly while remaining decentralized?

the first search engines worked this way - you'd submit your site to their index in a certain category with some keywords, and anyone could explore the index. they all seemed to realize around the same time that they ought to keep their indexes secret from one another and they vanished just like that.
Asymmetric keys as the only addresses, and an authenticated version of Virtual Ring Routing as the routing protocol. That guarantees you're talking to the correct server for an address, that addresses never need to change, and that a server can have as many different addresses as it needs to.

Build in a layer for onion routing as well, so that all the servers in the middle don't automatically know who you're trying to reach.
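A minimal sketch of the "address = key" idea, similar in spirit to what overlay networks like cjdns and Yggdrasil do: derive a stable address from a hash of the node's public key, so possession of the matching private key is what proves ownership of the address. The prefix and the exact construction below are illustrative, not any particular network's scheme.

    import hashlib
    import ipaddress

    def address_from_pubkey(pubkey: bytes) -> ipaddress.IPv6Address:
        digest = hashlib.sha256(pubkey).digest()
        # illustrative layout: a ULA-style 0xfd prefix plus 120 key-derived bits
        return ipaddress.IPv6Address(bytes([0xfd]) + digest[:15])

    # Any 32-byte public key (e.g. an Ed25519 key) maps to a fixed address;
    # renumbering never happens because the address follows the key, not the network.
    print(address_from_pubkey(b"\x01" * 32))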

You have very closely described Yggdrasil, which is a proof-of-concept for these sorts of ideas: https://yggdrasil-network.github.io/
* A better IRC, with security, anonymity and scalability, so people don't actually need Slack, MS Teams, Zoom, Google Meet, Whatsapp, iChat and tens of other apps just to be able to talk with each other.

* A protocol for running distributed binary apps. Transforming the browser into an operating system sucks for both users and developers.

I would have tried to arrange it so that whatever method comes to the mind of the average person for publishing a document on the internet publishes a static document.

The situation today of course is that the method that usually comes to mind when a person decides that something he or she has written should be put on the internet publishes what is essentially an executable for a very complex execution environment, and (except for a single non-profit with steadily decreasing mind share) the only parties maintaining versions of this execution environment with non-negligible mind share are corporations with stock-market capitalizations in the trillions.

What's especially painful is that this isn't actually an inherent problem, just a shortcoming in tooling. If WordPress had defaulted to generating static HTML as its output, even just as a preferred caching method that was enabled by default, you would have basically solved this problem.
I've thought about this a lot. I'd throw away HTML/CSS and start over with a client centric rendering protocol that is based on presentation first and semantics second. The language would be run-time compiled to describe exactly what needs to be rendered on the page and would be streamable to prevent rendering locks.

Typical page size would go from about 1 MB today to 64-128 KB. Images would stream in after the initial page renders, but since most pages would fit in 1-3 packets, you'd see pages pop in very quickly. This would also be very helpful for poor connections, mobile and the developing world.

I'd fund a team to do this if I could figure out who would buy it.

Would this really change page sizes significantly?

Take amazon landing as an example:

* HTML/CSS: 80 KB and 88 KB

* Images: 3.2 MB

* Font: 180 KB

* JavaScript: 260 KB

Reducing HTML/CSS to 1 byte each would make nearly no difference, because huge parts of pages are images.

How would this handle accessibility?

64-128K page size is perfectly doable in HTML today if anybody actually cares enough to do it. How much of the bloat in say an Amazon page is the JavaScript?

Is there something about your protocol that would prevent the big players from re-normalizing 1M page size with it?

Could you explain how this is different from SVG?

SVG is still a markup language, not one that is compiled or streamable. It's also not relational. While you can design a page with SVG, it's not parametric, in that you can't modify the content or the page and have it re-flow, resize or act responsively.

I would also describe SVG as a very verbose instruction set rather than a reduced instruction set.

That’s a cool side project! Build a modern site in svg. I guess you’d need JS to make it responsive
1. SCTP instead of TCP (doesn't suffer from the maddening layering violation in TCP/IP w/ the host IP being part of the TCP tuple). This would solve mobile IP.

2. SRV RR's instead of "well known ports". Solves load balancing and fault tolerance as well as allowing lots of like-protocol servers to coexist on the same IP address.

3. Pie-in-the-sky: IPv4 semantics w/ a 96-bit address space (maybe 64-bit).

For DNS specifically, CNAME responses applying for the whole domain name instead of a specific record type was definitely a mistake in hindsight. It's just because of that decision that we can't have CNAMEs at the apex, which definitely is in the way in a bunch of situations.
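As a concrete picture of point 2 above, this is roughly what service discovery via SRV records looks like today where it is already used (XMPP, SIP, etc.), sketched with the third-party dnspython package; the queried name is just an example and the records returned will vary:

    import dns.resolver  # third-party: pip install dnspython

    # _xmpp-client._tcp is one real-world service commonly published via SRV.
    answers = dns.resolver.resolve("_xmpp-client._tcp.jabber.org", "SRV")
    for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
        # clients try targets in priority order, load-balancing by weight,
        # and the port comes from DNS instead of a well-known number
        print(rr.priority, rr.weight, rr.port, rr.target)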
I wish there were something better than SMTP.

I tried to learn it a while ago and got super frustrated with how things are. The whole thing looked upside-down to me.

I mean, DKIM, SPF, DMARC, Bayesian filtering, etc. sound like band-aids upon band-aids to fix something that's really broken inside.
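For anyone who hasn't run into them: the "band-aids" listed above are all just policies published as DNS TXT records next to the domain. The entries below are illustrative examples of the general shape only (the domain, selector and values are made up, and the DKIM key is a truncated placeholder):

    # Hypothetical email-authentication records for example.org
    email_auth_records = {
        "example.org":                 "v=spf1 mx include:_spf.mailhost.example -all",       # SPF
        "sel1._domainkey.example.org": "v=DKIM1; k=rsa; p=MIIBIjANBgkq...",                   # DKIM public key (truncated)
        "_dmarc.example.org":          "v=DMARC1; p=reject; rua=mailto:dmarc@example.org",    # DMARC policy
    }
    for name, txt in email_auth_records.items():
        print(f"{name} TXT {txt!r}")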

Maybe make the DNS less US-centric (e.g. .com vs. .co.uk) and replace the PKI with DANE.

We should have had DHCP prefix delegation for IPv4 so people wouldn't need NAT.

DHCP-PD for IPv4 would eat up all the addresses on day one. It only makes sense to automatically delegate prefixes when those prefixes are an abundant resource.

Yes, and then the motivation for IPv6 would have arrived sooner.

So basically, we should have used IPv4 addresses more profligately in the '90s, in order to pull the runout date forward from 2014 to 199x-200x, increasing the urgency of the transition.

That's not the worst idea I've ever heard.

In DHCP-PD you get a subnet instead of a single IP.
I want a network that scales to trillions (or more) of top-level nodes (rather than the ~2 million supported by IP and BGP). I want everyone to be able to host a router.

I'd go source-routed isochronous streams, rather than address-routed asynchronous packets.

I haven't updated my blog in a few years, but I'm still working on building the above when I have the time. (IsoGrid.org)

DNS is far from perfect: TLDs are stupid and always have been, the arbitrary rules for records are stupid, MX records are ridiculous, UDP size constraints on answers... never mind, I could keep going. DNS sucks.

Also: DNS resolvers and ISPs are the reason for amplification attacks.

ISPs should simply check where the UDP traffic is coming from, and filter out packets that have a different UDP source address inside them.

This would literally make the internet DDoS free.

There are DDoSes that are based on botnets without specifically relying on amplification or reflection. There have even been DDoSes relying on user-contributed resources:

https://en.wikipedia.org/wiki/Low_Orbit_Ion_Cannon

Amplification makes a DDoS bigger, but isn't what makes it a DDoS.

> Amplification makes a DDoS bigger, but isn't what makes it a DDoS.

I know, but large botnets like Mirai and similar usually take advantage of how UDP flooding works, because TCP RST packets don't scale as well.

I was just mentioning it because ISPs could easily get rid of most UDP-based DDoS attacks if they watched their networks' packet sources (which they're doing anyway).

Usually it's literally just a comparison against assigned IP ranges (whether the source and victim addresses fall inside or outside the ISP's own allocations), and if they mismatch, it's very likely a UDP flooding attack.
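
A toy version of that check (essentially BCP 38 egress filtering); the customer prefixes below are documentation ranges standing in for whatever the ISP actually assigns:

```python
import ipaddress

# Hypothetical prefixes this ISP has allocated to its own customers.
CUSTOMER_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]


def should_forward(src_ip: str) -> bool:
    """Drop outbound packets whose source address the ISP never assigned,
    i.e. spoofed sources used for reflection/amplification attacks."""
    src = ipaddress.ip_address(src_ip)
    return any(src in net for net in CUSTOMER_PREFIXES)


print(should_forward("203.0.113.7"))  # True: legitimate customer source
print(should_forward("8.8.8.8"))      # False: spoofed source, drop it
```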

I have this experience where I come into new orgs or projects and I think that if we could just start fresh with the old lessons, we could build it so much cleaner. It's partially true. But mostly we end up building a really clean base, and then, as the business requirements come up, we need to add back in all the things I thought were cruft. I think lots of tech is this way. Try to build a new DNS and you'll just end up reinventing a (probably worse) version of DNS.
even before new requirements come up, some of the idiosyncrasies or edge cases would be overlooked in the new version and the project would take much longer than anticipated.

if the current tech is so messed up to warrant a re-build, why would one expect that all the requirements are properly captured?

having said that, stepping back and articulating the existing problems could be a viable first step. Identify and scope the problems before dreaming of solutions. /product engineer and party pooper hat off

Not sure if this should be part of the internet or the web.

But a decentralised way to do an internet search: an unbiased, no-tracking search engine.

One small thing: spell referrer correctly.
The number one change I'd make is not to trust users to behave.
Anonymity built in, ever since I heard Eben Moglen's story about it.
I would prefer if the internet architecture was a mesh network and that sites/pages were content addressable.
I'd make a content addressable system to have better caching, balancing, backups and history of changes. I'd like the internet to preserve its knowledge into the future.

As for security, I'd enhance the web browsers to do internet hygiene - removing unwanted bits and blocking information leaks. The browser should protect user accounts from being linked together.

Another major issue is payments which today is a high friction backwards process and acts like a gatekeeper. We need an equitable solution for micro-payments without too large processing costs.

The last one is search - users can't customize the algorithmic feeds it generates and startups can't deep crawl to support inventing new applications. Search engine operators are guarding their ranking criteria and APIs too jealously while limiting our options.

So in short: IPFS/BitTorrent-like storage, a nanny browser, easy payments, and open search.

JavaScript would be murdered over and over again..
HTTP should have had some kind of URI event pub/sub mechanism from the start, including notifications for URIs pending removal to facilitate archival without polling or whatever crazy ad-hoc madness you want to call archive.org's priceless efforts.

Still an unresolved issue to this day AFAIK.

Yes but how would you implement this on a low level? How would a system know something had happened without some kind of polling lurking below?
Whether the web server in question polls files in a posix filesystem for mtimes/ENOENT, employs inotify, or has a higher-level CMS type deal to trigger events when the CMS modifies/adds/deletes resources kept in an RDBMS is an irrelevant server implementation detail.

I think the real challenge to implementing this kind of thing is preventing abuse.

If servers were to accept subscriptions on resources under their purview in the form of an arbitrary URL to poke when there's an event, it's ripe for abuse without some additional steps. But I think something along the lines of what letsencrypt does would be sufficient; make the subscriber prove control over the domain to be poked by asking it to reply with some gibberish at a specific url under its purview before accepting the subscription at that domain, and you'd throttle/limit those operations, just like letsencrypt does. At least that way you're not turning subscriptions into DDoS attacks via notification events sent to unsuspecting domains...
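
A rough sketch of that verification step, with invented endpoint semantics; the only idea it shows is that a subscriber must echo a one-time token from the callback URL before any notifications are sent there:

```python
import secrets
import urllib.parse
import urllib.request


def verify_subscription(callback_url: str, timeout: float = 5.0) -> bool:
    """Ask the would-be subscriber to echo a one-time token before we agree
    to send change notifications to callback_url."""
    token = secrets.token_urlsafe(32)
    challenge = f"{callback_url}?challenge={urllib.parse.quote(token)}"
    try:
        with urllib.request.urlopen(challenge, timeout=timeout) as resp:
            return resp.read().decode().strip() == token
    except OSError:
        return False


# Only if verify_subscription(...) returns True would the server store the
# subscription and later notify it about modified or removed URIs, with
# rate limits applied much as ACME rate-limits certificate issuance.
```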

Here is an academic discussion [0]. It was published in 2005. It may not cover more recent problems, but the proposed design principles are still valid.

[0] Tussle in Cyberspace: Defining Tomorrow’s Internet https://groups.csail.mit.edu/ana/Publications/PubPDFs/Tussle...

A slightly different answer: I would change the domain name ownership model. Right now, to buy kayaks you go to amazon.com, and to travel to the Amazon you go to kayak.com; it's confusing for end users. It also empowers squatters rather than innovators. And the whole .org debacle showed that community representation is not at the center of things.
If you go to, say, apple.com can you buy apples or Apple computers? Would each page be like an index with subpages disambiguating things like https://en.wikipedia.org/wiki/Apple_(disambiguation)? Or it would show about the fruit with a note for Apple Inc like https://en.wikipedia.org/wiki/Apple?
I’d make sessions independent of IP addresses. So that as I move between wifi, fixed and mobile networks, my sessions would remain active even as my interface addresses change.
Session persistence can be achieved on top of IP.

E.g. for ssh, there's mosh (uses UDP to work around the control part of TCP).

Come to think of it -- everything I use is either stateless (http), or can use ssh for transport (ssh, sftp, sshfs, rsync, ...).

I'm sure some other protocols still break, but I haven't felt that pain in years!

More client-side retry built into protocols, and the ability to easily fail over from the client side.

Protocols like DNS and SMTP are designed so that multiple servers can handle traffic, and one going down isn't a big deal: the clients will just retry against another server and the entire system keeps working.

Compare that to HTTP, which has no retry mechanism, which results in significant engineering to deal with single points of failure, fancy load balancers, and other hard engineering challenges. Synchronization of state across HTTP requests would still be an issue, but that's already a problem in the load-balancer case, usually pushed to a common backing database.
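
A sketch of what MX-style failover might look like if HTTP clients did it natively; the host list and path are hypothetical:

```python
import urllib.request

# Hypothetical prioritized server list, analogous to MX records:
# (priority, host), where lower priority is tried first.
SERVERS = [
    (10, "a.example.com"),
    (10, "b.example.com"),
    (20, "backup.example.com"),
]


def fetch(path: str, timeout: float = 3.0) -> bytes:
    """Try each server in priority order; only give up when all have failed."""
    last_error = None
    for _, host in sorted(SERVERS):
        try:
            with urllib.request.urlopen(f"https://{host}{path}", timeout=timeout) as r:
                return r.read()
        except OSError as exc:
            last_error = exc  # move on to the next server, like an MTA does
    raise ConnectionError(f"all servers failed: {last_error}")


# body = fetch("/status")
```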

I would design a protocol to make access to porn handier.
Finally, someone who actually understands the user requirements and expectations for the internet.
Some way to make spoofing harder. Complete cryptographic authentication and filtering of every packet is probably technically unfeasible even now. But certainly we can think of other "soft" measures.
Could that backfire? I mean, if something becomes harder to pull off, but still not hard enough to block bad actors from doing it, wouldn't we end up in a situation where strong authentication is perceived as not being "that important" because "what are the odds that somebody is spoofing the packets?"

Wouldn't that still be better than the current situation, where you can't run any public service anymore without DDoS protection? The attackers have it too easy.

I don't see how that would preclude strong authentication. We would need TLS all the same.

By making it truly decentralized: Like in a mesh network.
A mail protocol that guarantees the identity of the sender + e2e encryption.

And making more top level domains available from the outset instead of the fetishisation of the .com domain.

Larger IP address space.

E2EE for all major protocols from the start (DNS, HTTP, SMTP, etc)

Protocols for controlling unwanted email and analytics (a not disastrous version of EU’s cookie consent)

Protocols (?!) ...and the idea to pump 'content' - instead of launching it?

Sounds elevating... P-:

Disclaimer: 'This position is constructed to deal with typical cases based on statistics, which the reader may find mentioned, described or elaborated on a tendency basis.'

Sure, OT - but you may also like thinking about it the other way (-;

Use whatever technology is required to make it decentralized so tyrants and other evildoers have no chance of stopping free speech. The future of the world depends on it vs. the future of a relatively few self-anointed, banal, vile, narcissistic wealth stealers.
I don't think free speech should be an absolute, there are situations such as vaccinations during a pandemic when free speech kills. Freedom vs security trade-off.
I first understood the question as "what should have been done differently if the internet were designed today". I think that is a more interesting question. What mistakes were made that later needed less-than-optimal workarounds? What comes to my mind is character encodings and i18n: they were solved in many different ways in different protocols.
I'm not sure how this can be done, but remove location association from IP. It annoys me to no end that google changes the language based on my location rather than my preference (sometimes with no obvious way to fix it). It would also stop the annoying country restrictions on websites.

There's Accept-Language, except that it's not really usable. Nobody sets it, so nobody relies on it, and anyway you can't tell it useful things like 'I know these 3 languages, in preference order, but if the original content is in one of them, use that instead of a translation into my more preferred language'. Websites that use IP to guess which language or country site you prefer aren't great, but there's nothing great to do.
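
For reference, a small parser for the preference-order part that Accept-Language does support; the "prefer the original content" nuance the comment asks for has no standard way to be expressed:

```python
def parse_accept_language(header: str):
    """Turn 'en-GB,en;q=0.8,de;q=0.5' into [('en-gb', 1.0), ('en', 0.8), ('de', 0.5)]."""
    prefs = []
    for part in header.split(","):
        piece = part.strip()
        if not piece:
            continue
        lang, _, params = piece.partition(";")
        quality = 1.0
        if params.strip().startswith("q="):
            try:
                quality = float(params.strip()[2:])
            except ValueError:
                quality = 0.0
        prefs.append((lang.strip().lower(), quality))
    # Highest q-value first, i.e. most preferred language first.
    return sorted(prefs, key=lambda item: item[1], reverse=True)


print(parse_accept_language("en-GB,en;q=0.8,de;q=0.5"))
```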
There is no inherent location information in an IP address, but those packets have to physically go somewhere at 2/3 light speed. We could design a network like Tor where the routing is geographically oblivious, but then everywhere on Earth would have the same (very slow) ping time.

It is saner to make a fast network, and then layer an anonymous one on top of it.

Starlink is the current solution that I am hoping for.
Starlink raises the speed from 2/3*c to 1*c, but IP addresses are still anchored to a location... otherwise the latency would suck.

You can geolocate any fast internet connection by measuring its latency to various points on Earth.

> You can geolocate any fast internet connection by measuring its latency to various points on Earth.

How would you be able to differentiate 20 degrees east of the server from 20 degrees west of the server?
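
One way to see the answer: a single vantage point only pins you to a circle of possible locations, but three or more vantage points at known positions intersect in roughly one spot. The sketch below uses a flat plane, an idealized 2/3-c propagation speed, and made-up latencies purely for illustration:

```python
import math

# Rough speed assumption: ~2/3 of c in fiber, expressed in km per millisecond.
KM_PER_MS = 200.0

# Hypothetical vantage points (x, y in km) and measured one-way delays in ms,
# chosen to be consistent with a host near (600, -700).
LANDMARKS = [((0.0, 0.0), 4.6), ((1000.0, 0.0), 4.0), ((0.0, 1000.0), 9.0)]


def estimate_position(landmarks, step=10.0, size=2000.0):
    """Brute-force least squares over a grid: pick the point whose distances
    to the landmarks best match the latency-derived distance estimates."""
    best, best_err = None, float("inf")
    x = -size
    while x <= size:
        y = -size
        while y <= size:
            err = 0.0
            for (lx, ly), delay_ms in landmarks:
                measured = delay_ms * KM_PER_MS
                actual = math.hypot(x - lx, y - ly)
                err += (actual - measured) ** 2
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best


# With only one landmark every point on a circle ties; the third landmark is
# what breaks the east/west (mirror-image) ambiguity.
print(estimate_position(LANDMARKS))
```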

Geolocation of IP blocks is basically all Google et al's fault. Once you go around wardriving to map all the infrastructure.... Well.. you get what they paid for.
GeoIP stuff existed long before Google Maps. It doesn't help that ISPs put tons of geographical info in reverse DNS. (Once upon a time my ISP's first-hop router had my street name in it!)
I would modularize and separate SSL and HTTP, so that the former was a local service providing a local HTTP connection, and I could keep using the browsers I love with most websites, not just my own.
The biggest mistake was making it trivial to forge email headers.

While the spam problem is much better than it used to be, that's because there's a whole lot of behind-the-scenes (and expensive) infrastructure devoted to combating it.

That infrastructure has also made it considerably more difficult to run your own mail server. While you can still do it, there are many hoops to jump through if you want to keep your mail from being dropped on the floor by the first overzealous spam filter it encounters. Unless you're big enough to devote a staff just to that (or you have a lot of free time) it's easier (but more expensive) to just use MailGun, MailChimp, or one of their brethren.

I would guess that the annual cost of spam is somewhere in the billions.

>> The biggest mistake was making it trivial to forge email headers.

Thanks to VoIP, we've seen a surge in phone numbers being spoofed with robocalling. I'd like to see some form of authentication applied to this and to email.

Performance, specifically making requests as parallel as possible.

Does it really need to take 5 seconds to load a website whose contents total 500kB on a 100Mbit/s connection with a 6-core CPU?

Several responses have mentioned removing anonymity from the equation to enforce good behavior.

I think we could accomplish something similar just by having a global SSO that works for any website, totally decoupled from your actual identity or location. The only information it reveals might be some sort of quantifiable social standing based on how you interact with others on forums.

IP address per device instead of per network interface. Lisp instead of JavaScript.

And how would routers work in this case?
Personally, I'm targeting the browser. Web applications have evolved so much that we now need access to more hardware to further improve our web apps. Imagine having direct access to the GPU and other hardware; we could then run full apps in the browser, think Photoshop/Final Cut Pro, even games that require full 3D rendering. To an extent, this also requires us to remove the shackle of JavaScript being the only allowed scripting language in the browser.
You'll never get frictionless money transfers. Stop asking.

Not trying to be glib, but it's a political and law enforcement necessity that not every Tom, Dick, Harry, and Jane be entry points to the financial system.

I'm not saying I agree with the justification for it, but as someone who has seen how it works... It is alas, an incompatible outcome.

Make the actual protocol for HTTP always be TLS, always port 443. Insecure HTTP is just self-signed.
The web was once diverse, but now it is not.

I would eliminate third party requests for page assets. If you need images, css, js, or such it would come from the same origin as the page that requests it.

1. This would eliminate third party tracking

2. This would eliminate abuse of advertising

3. This would eliminate CDNs and thus force page owners to become directly liable for the bandwidth they waste

4. It would make walled gardens more expensive

I would also ensure that section 230 is more precisely defined such that content portals are differentiated from the technology mechanisms on which such traverse. The idea here is that Section 230 continues to protect network operators, web servers, and storage providers from lawsuits about content but not the website operators that publish such.

1. Sites like Facebook and Twitter would become liable for their users submissions regardless of moderation or not

2. Terms of service agreements would largely become irrelevant and meaningless

3. This would radically increase operational risks for content portals and thus reinforce content self-hosting and thus a more diverse internet

I would ensure that identity certificates were free and common from the start.

1. The web would start with TLS everywhere.

2. This would make available models other than client/server for the secure and private distribution of content

3. This would, in joint consideration of the adoption of IPv6, also eliminate reliance upon cloud providers and web servers to distribute personal or social content

No remote code execution is allowed on the web. You can download binaries and run them, but not in the browser's context.

So instead of running code in the browser's sandbox, you have to run it at user level? Sounds terrible, and this is what things were like in the '90s. Say you wanted to do some banking: you would have to download and install the bank's application, which usually only ran on Windows.

No thanks.

Security.

I recall a possibly apocryphal story about TCP - namely that it was originally meant to be encrypted. Supposedly it was the NSA who had gentle words with implementors which led to the spec not including any mechanisms for encrypted connections.

So, encrypted by default TCP, for a start.

DNS should have a much simpler, possibly TOFU model to help with the usability elements. DNSSEC is just a nightmare and confidentiality is nonexistent.

Somewhat controversially, I'd ditch IPv6 in favour of a 64-bit IPv4.1 - blasphemy, I know, but the ROI and rate of adoption for IPv6 don't justify its existence, IMO.

What would 64-bit IPv4 solve that IPv6 doesn't? The migration path would still be just as painful.
I'd make it less transactional and more realtime. No HTTP requests or TCP packets; instead, something more like old TV broadcasts, where there is a continuous signal always being sent.

If it were possible to have infinite radio frequencies to use, then you could have every possible page of every website continuously broadcast on a different frequency. To load a page, all you have to do is tune in to that frequency; you then get the latest version of that site instantly, without having to wait. This gets more complicated for signed-in websites, but there is no reason you couldn't implement the same thing by adding more frequencies for more users. This wouldn't work for POST requests, but I think any GET request would work fine.


This would mean that in order to send something back to that server, you would need to send an equally powerful radio signal that could reach back to where the server is. It's not the most convenient thing for some people to have a high-power radio transmitter in their backyard.
Everything can always be improved. But there are some things that would make life for a lot of people easier.

For one thing, operating systems' TCP/IP stacks: they should have come with TLS as a layer that any app could use just by making the same syscall they use to open a socket. For another, service discovery should not be based on hardcoded numbers, but on querying for a given service on a given node and getting routed to the right host and port, so that protocols don't care what ports they use.

Domain registration and validation of records should be a simple public key system where a domain owner has a private key and can sign sub-keys for any host or subdomain under the domain. When you buy the domain you register your key, and all a registrar does is tell other people what your public key is. To prove you are supposed to control a DNS record, you just sign a message with your key; no more fussing about with what the domain owner's email is, or who happens to control, at this minute, the IP space that a DNS record is pointing to. This solves a handful of different problems, from registrars to CAs to nameservers and more.
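
A bare-bones sketch of that ownership model using Ed25519 keys from the cryptography package; the delegation and record formats are invented for illustration:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Domain owner's root key: the only thing the registrar would store.
root_key = Ed25519PrivateKey.generate()
root_pub = root_key.public_key()

# Delegate authority for a subdomain by signing its public key.
www_key = Ed25519PrivateKey.generate()
www_pub_raw = www_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
delegation = root_key.sign(b"delegate:www.example.com:" + www_pub_raw)

# To "prove control" of a DNS record, just sign the record you want to publish.
record = b"www.example.com. 300 IN A 203.0.113.10"
record_sig = www_key.sign(record)


def verify(root: Ed25519PublicKey) -> bool:
    """A resolver that knows only the registrar-published root key can check
    both the delegation and the record signature."""
    try:
        root.verify(delegation, b"delegate:www.example.com:" + www_pub_raw)
        Ed25519PublicKey.from_public_bytes(www_pub_raw).verify(record_sig, record)
        return True
    except InvalidSignature:
        return False


print(verify(root_pub))  # True
```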

The security of the whole web shouldn't depend on the security of 350+ organizations that can all independently decide to issue a secure cert for your domain. The public key thing above would be a step in the right direction.

BGP is still ridiculous, but I won't pretend to know it well enough to propose solutions.

Give IPv4 two more octets. It's stupid but sometimes stupid works better than smart.

Give the HTTP protocol the ability to have integrity without privacy (basically, signed checksums on plaintext content). This way we can solve most of the deficiencies of HTTPS (no caching, for one) but we don't get MitM attacks for boring content like a JavaScript library CDN.

And I would make it easier to roll out new internet protocols so we don't have to force ourselves to only use the popular ones. No immediate suggestion here, other than (just like port number limitations) it's stupid that we can't roll out replacements for TCP or UDP.

And I would add an extension that encapsulates protocol-specific metadata along each hop of a network. Right now if a network connection has issues, you don't actually know where along the path the issue is, because no intermediate information is recorded at all except the TTL. Record the actions taken at each route and pass it along both ways. Apps can then actively work around various issues, like "the load balancer can't reach the target host" versus "the target host threw an error" versus "the security group of the load balancer didn't allow your connection" versus "level3 is rejecting all traffic to Amazon for some reason". If we just recorded when and where and what happened at each hop, most of these questions would have immediate answers.

Ads don't have anything to do with the design of the internet, and if you want free expression to be possible, then ads will be possible too. Otherwise you'd need every scrap of content anyone makes to go through a censor before being made available.
Separate the portions of the web into non-profit and for-profit sections. Web 2.0 ruined the internet with the introduction of paywalls and 'premium' content that was never premium in the first place. You're welcome to live in that dystopian hellscape, but I'd frankly prefer the ability to hit a switch and instantly disable all bullshit.
Instating capital punishment for the use of any scripting language.
Identity as a core layer. Having been on the wrong end of a large service that was relentlessly attacked and spammed, I understand the value of anonymity, but it probably should not have been the default.
While I deeply disagree (go look at Facebook and tell me with a straight face that a real name policy fixes spam), I truly appreciate you actually saying that under your real name; the number of times I've seen anonymous posters arguing against anonymity...
Cryptographic identity verification and trusted clocks. As in, cryptography would be built-in and everyone would have a set of keys that they could use to verify ownership of digital content by using cryptographic signatures and timestamps.

How would you get those trusted clocks to agree with each other?

Through a synchronization protocol, but it's obviously impossible to have perfectly synchronized clocks, so the safest thing would be to use several timestamps from several clocks.

Blockchains in a way are such clocks where the ordering of the blocks on the chain provide an implicit counter. But blockchains have too much overhead so by assuming some trust it would be possible to get rid of the proof of work mechanism and just use a signed timestamp from a trusted source.

How is that different from the existing ntp infrastructure, which already supports encryption, integrity, and stratum one time servers?

Can I send a hash of some content and then get a signed response from the NTP server with a timestamp?
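
That is roughly what RFC 3161 time-stamping authorities do. A toy version of the exchange, with a made-up JSON token format (a real TSA uses ASN.1):

```python
import json
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The "trusted clock" service holds a signing key whose public half clients know.
tsa_key = Ed25519PrivateKey.generate()


def timestamp(content_hash_hex: str) -> dict:
    """Client sends only a hash; the service binds it to its current time."""
    token = {"hash": content_hash_hex, "time": time.time()}
    payload = json.dumps(token, sort_keys=True).encode()
    return {"token": token, "signature": tsa_key.sign(payload).hex()}


# Anyone holding the service's public key can later verify that this hash
# existed no later than the signed time; several such tokens from independent
# services approximate the "several clocks" idea above.
reply = timestamp("9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08")
print(reply["token"]["time"], reply["signature"][:16], "...")
```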
Canonical User Identity. I know this is massively controversial, but if we were starting at the beginning, an internet without anonymity would be a kinder, safer internet. One where data asymmetry and its enterprises wouldn't have a toehold. A place that extends reality rather than contorts it. Privacy and anonymity have a place, and should be a right for consumption, but never for creation.
The opposite is true. Privacy and anonymity are the best protection for the weak and marginalized. They are very closely correlated to the concept of free speech. And it is the ability to think and speak freely and work through matters that differentiates a civilized free society from the barbarians.

Certainly, things like free speech, the right to a fair trial, privacy and anonymity also somewhat benefit "bad people". But unless you live a very boring life indeed, from time to time perhaps you will be the one who is going against the grain of society. (Or some small but significant segment thereof). And then you'll be glad that you have a safe harbor to fall back on.

People would just buy some fake identity from some poor island somewhere, and governments would spit out a few billion fake people for espionage and propaganda.

And you'd need some sort of identity management for all the automated processes to assume, which would be a huge nuisance.

>Privacy and anonymity have a place, and should be a right for consumption but never for creation.

What makes you think so?

Hard nope from me. No thanks. I want nothing to do with your system. Literally the only reason anyone wants to know who anyone is at all is to throw men with guns in their general direction. That is the one and only thing that the internet doesn't allow, and to be honest, I think that's for the best.
First, you have to secure all the computers. Capability Based Security for everyone. This lets us all run mobile code without danger. (No more virus or worm issues)

Next, we use 128 bit IP addresses, 32 bit port and protocol numbers, and absolutely forbid NAT as a way of getting more addresses. (no ip address shortage)

Next, all email has to be cryptographically signed by the sending domain using public key encryption. (No more spam)

Next, selling internet access [EDIT]ONLY is strictly prohibited. Either connections can host servers, or find a different business model. Any node of the internet should be able to run a small server. (No more censorship in the walled gardens) Yes, I know it's stupid to try to host a server on the upload bandwidth I get from a cable modem, but it shouldn't be prohibited. If I want to see my webcams from anywhere, it shouldn't require someone else's server.

DNS should be done using a blockchain, it would need some iteration to get it right.

> Next, selling internet access is strictly prohibited.

Not sure if this is attainable; someone has to lay the cables and maintain them. What do you propose, that everyone does this for themselves? I don't think we'll be seeing much fiber cable, or many internet users, then. What about trans-oceanic cables? In the end someone's got to pay the bill.

I do agree however that DNS should be free and decentralized. The difficulty here would be to avoid domain hoarders. Maybe through some verification / vouching system? On the other hand that seems a lot like PGP keyservers, which also didn't turn out all too great.

All in all, difficult problems. Maybe the current system isn't all too bad after all?

I edited my entry, I hope it makes sense... access is different than connection... access forces us to be consumers, connection lets us be suppliers.
>> next, all email has to be cryptographically signed by the sending domain using public key encryption. (No more spam)

How does this work? Would every email address need to get a public/private key? From where? Does someone get to control who can use email? How much does it cost? Do the certs expire? How do we manage the system for billions of email addresses?

I ask this with genuine interest - I'm not sure how a cert-required email system would work - or how it would help with spam...

If I wanted to send email from [email protected], I'd simply use my DNS specified MX server, and it would sign the outbound email (I'd have to log in, for the email to be forwarded). Any forwarding server could check the signature against the MX specified server and get the public key for the domain. If the signature didn't match, the email would get rejected.

We'd have to agree on standard protocols, but all domains would manage their own keys, nothing would be centralized past the already centralized DNS system.

Mostly agree except instead of “prohibiting” certain things the systems we build should disincentivize them.

E.g., no need to prohibit NAT if there are more than enough IPs to go around.
