
I don’t understand micro-frontends.

[Image: brilliant micro-frontends joke :joy:]


Yesterday, after coming back from a walk with my dogs, I saw a few notifications on Twitter where people had tagged me, asking me to share my thoughts on the thread started by Dan Abramov regarding micro-frontends:

[Screenshot of Dan Abramov’s tweet]

If you follow me, you know I’m very passionate about micro-frontends and have been working with them for a while; I also keep an open mind, analysing different approaches and understanding their pros and cons.

If you don’t follow me and you are curious about the topic from a technical point of view, just check my Medium page; otherwise, there are many other resources on micro-frontends, just search on Medium or with your favourite search engine.

I won’t be able to cover all the topics discussed in the Twitter thread, but let’s start from the beginning and see if I can help add a bit of context (you are going to hear this word more and more during this article :grin:) to the micro-frontends topic.

Disclaimer

First and foremost, I’m not writing this post to blame or attack anyone, or to start a social media flame war. I respect any point of view; sometimes I share the same view as other people and sometimes not, and this is what brings innovation and new ideas to the table, so I’m totally up for it.

Considering a few people mentioned my name in the thread started by Dan, I’d like to share my thoughts, because I truly believe we can have a genuine discussion about micro-frontends that benefits everyone, covering common questions I receive weekly on social media, via my personal email, after my presentations and so on.

Other people also got in touch with me regarding the aforementioned tweet. I didn’t reply directly to the tweet because discussing an interesting topic like this in 280 characters is really limiting and prone to misunderstanding or to omitting important details.


Why micro-frontends instead of a good component model?


Components are definitely a valid solution; many companies use them every day with great success, but they are not a silver bullet for everything. The project, the team, the business and, more generally, the context have to fit, otherwise we are trying to fit a square peg into a round hole and we don’t always get what we expect.

Exploring new possibilities and challenging our beliefs and “standard way of doing things” moves our industry forward, consolidating existing standards or introducing new ones.

Let’s start with this: micro-frontends are not trying to replace components. They are one option among many, and they don’t fit every project, just as components are not the answer to everything.

Use the right tool for the right job: that should be our goal.

I’ve seen large organizations with terrible codebases and practices have a successful product, and I have also seen the complete opposite; we cannot look at only one side of the coin.

So far I have tried micro-frontends only at scale (roughly 200 people, frontend and backend engineers, working on the same project); in conjunction with microservices and team ownership, they are working pretty well compared to the previous model we had in my company.

Would they work in smaller projects? In theory yes, but I’d like to try them first.

On paper everything looks fine; it’s when you get into the details that you realise the limitations and find new challenges. If you have any experience, I’d love to hear from you!

Regarding micro-frontends, there are different flavours: for instance, we can use iframes for composing the final view, use Edge Side Includes or Client Side Includes, or even use a pre-rendering strategy like Open Components or Interface Framework and cache the results at the CDN level.

Another approach is using an orchestrator that serves SPAs, single HTML pages or SSR applications; the orchestrator can live at the edge, at the origin or on the client side. An example of such an orchestrator is Single-SPA.
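
To make the orchestrator idea concrete, here is a minimal client-side sketch using Single-SPA; the module names (“@shop/catalog”, “@shop/checkout”) and routes are hypothetical, and it assumes the object-style registerApplication API where each micro-frontend exposes its own lifecycle functions.

```typescript
// Minimal client-side orchestration sketch with single-spa.
// Each registered application is an independently built and deployed micro-frontend.
import { registerApplication, start } from "single-spa";

registerApplication({
  name: "catalog",
  // Lazily load the micro-frontend bundle only when its route is active.
  app: () => import("@shop/catalog"),
  activeWhen: ["/catalog"],
});

registerApplication({
  name: "checkout",
  app: () => import("@shop/checkout"),
  activeWhen: ["/checkout"],
});

// Hand control to the orchestrator: it mounts and unmounts each
// micro-frontend as the URL changes.
start();
```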

Those methodologies suggest two main approaches for identifying the size of a micro-frontend:

  • a part of the user interface that could correspond to a component, but is not necessarily mapped 1-to-1 with a component
  • an entire business domain that could correspond to an SPA, a single HTML page or an SSR application

Each of them has its pros and cons. I personally prefer the latter, but it’s not bulletproof either; it’s important to understand the limitations of each approach and whether those limitations could impact the final project outcome.

Micro-frontends are definitely a technique that impacts your organization, providing decoupling between teams, avoiding too much centralisation and empowering teams to take local decisions.

This doesn’t mean those teams are incapable of agreeing on a strategy to pursue or on an API contract, for instance. Micro-frontends enable a team to take a path it can follow without needing to coordinate every single technical decision that could affect the codebase with other teams, allowing it to fail fast and to build and deploy independently, within boundaries defined by the organization (languages supported by the company, best practices and so on).

I have personally worked in different organizations where new joiners provided good insights on how to change a “core library”, but often, because of politics or because the change would not deliver an immediate benefit, those suggestions were parked in the backlog waiting for their turn.

Decentralising decisions to a team is probably one of the best things a company can do, because that team lives and breathes with the product team and the business experts, talking the same language every single day; they are on top of the game. Centralising instead loses the context and imposes constraints that are sometimes unnecessary.

When a company is capable of providing technical boundaries to follow inside a specific business domain, a team can express itself in the best way possible, maybe making some mistakes but recovering very fast, because the scope of the change is smaller than a full application.


Understanding the context


Dan is right in his example, but he is not looking at the context in which the conversation took place; he’s trying to generalise a solution that has to work for everything and everyone, and that is not the case.

The context of any decision is probably the most important thing for understanding a technical implementation made by an individual contributor, a team or an organization.

In the last decade, I have seen many projects written in similar ways, with the same architecture and similar patterns, but with different outcomes and different challenges faced during the journey. As I said before, software development is empirical, not scientific.

Nowadays there is a better understanding of which approaches we could use for delivering a project successfully; we no longer use one framework or architecture that fits them all, we try to use the right tool for the right job.

If a project leans heavily on shared components and the project is successful, that is absolutely fine; probably the desired outcome of the project, the environment, the actors involved and the process established for delivering the project make a shared components library a suitable solution.

At the same time, other contexts may require different approaches and thinking outside the box, because traditional methods are not providing predictable results.

Context is key: understanding the business, the environment where we operate and the result we are aiming for are all linked to our context.

Therefore, having a components library that abstracts the functionality of hundreds if not thousands of components is perfectly fine, just like having multiple SPAs where code is duplicated instead of being wrapped in a library (or several libraries).

The context forces us to take decisions that are sometimes not what other people expect. We have learnt many rules and guidelines in the past, like DRY (Don’t Repeat Yourself) or DIE (Duplication Is Evil); those are perfectly applicable, but they are not dogmas we need to respect no matter what, because sometimes there are good reasons for breaking them.

Don’t get me wrong, I’m not advocating that duplicating tons of code is a best practice, but sometimes it is a necessary evil for moving forward faster.

Code duplication can make our teams more autonomous, because they are not sharing code that could become more complex due to abstraction and they are not dependent on external teams.

As always, we need to be thoughtful about duplicating code; the context allows us to decide whether abstracting code rather than duplicating it is sensible or not.


Often abstractions are far more expensive than code duplication, and if you apply the wrong ones at scale you can generate complex code that leads to a lot of frustration. That frustration translates into worse behaviours, like ignoring the centralised approach in favour of a more minimal, “fit for purpose” approach implemented by a team inside its own codebase, with the result of less control over the overall solution.

So YES, let’s avoid code duplication, but be balanced in your decisions, because you would be surprised how much certain things can improve if you address them in the right part of your application.


Multiple tech stacks

[Screenshot of Federico’s tweet]

I fully agree with Federico here: being able to choose whatever technology we want could be a recipe for disaster… but what if we use only the best part of it?

Micro-frontends do not impose different technology stacks; the fact that they enable this approach doesn’t mean we need to follow it.

As in the microservices world, we don’t end up with 20 different languages in the same system, because each of them is opinionated and brings its own vision into the system; maintaining different ecosystems is very expensive and potentially confusing without providing many benefits.

But a trade-off can help here: keeping a limited list of languages or frameworks to pick from.

Suddenly we are not tightly coupled to one stack only: we can refactor legacy projects, supporting the previous stack and a new one that slowly but steadily makes its way into the production environment without the need for a big-bang release (see the strangler pattern); we can use different versions of the same library or framework in production without affecting the entire application; we can try new frameworks or approaches and see their real performance in action; we can hire the best people from multiple communities; and there are many other advantages.
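
As an illustration of the strangler pattern mentioned above, here is a small sketch of routing at the edge; the origins, paths and the handleRequest entry point are all hypothetical, and a real proxy would also forward request bodies and handle streaming.

```typescript
// Strangler pattern sketch: routes that have already been migrated are served
// by the new stack, everything else still hits the legacy application.
const NEW_STACK_ORIGIN = "https://new.example.internal"; // hypothetical
const LEGACY_ORIGIN = "https://legacy.example.internal"; // hypothetical

// Grows release by release as more of the application is strangled.
const migratedPrefixes = ["/checkout", "/account"];

export async function handleRequest(request: Request): Promise<Response> {
  const url = new URL(request.url);
  const migrated = migratedPrefixes.some((prefix) => url.pathname.startsWith(prefix));
  const origin = migrated ? NEW_STACK_ORIGIN : LEGACY_ORIGIN;

  // Simplified proxy: forwards only the method and headers of the incoming request.
  return fetch(origin + url.pathname + url.search, {
    method: request.method,
    headers: request.headers,
  });
}
```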

When we have the capability of using multiple stacks, having some guidelines really helps us get great benefits out of it; disadvantages or disasters become reality only when there is chaos and no common goal.

Watch out for the bundle size

[Screenshot of Dan Shappir’s tweet]

I have great esteem for Dan Shappir; I attended his workshop in San Jose during Fluent Conference last year.

He provided tons of good insights on how to optimise our web applications; he is absolutely a master of performance optimisation.

I think the comment Dan shared here really depends (again) on the context. Working with micro-frontends and slicing the application into multiple SPAs, for instance, allows downloading only part of the application; splitting the libraries from the application codebase allows us to increase the TTL on the CDN serving the vendor file, giving users quick round trips when needed; and browsers enhance their caching strategies, serving files directly from disk instead of performing multiple round trips.
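
As one possible way to split vendor libraries from the application code so they can live longer on the CDN, here is a sketch of a webpack configuration; the file-name pattern and the exact cache-group settings are assumptions for illustration.

```typescript
// webpack.config.ts (sketch): extract third-party code into a separate,
// content-hashed "vendors" bundle that can be cached aggressively at the CDN,
// while the application bundle keeps a shorter TTL.
import type { Configuration } from "webpack";

const config: Configuration = {
  output: {
    filename: "[name].[contenthash].js", // immutable, long-lived file names
  },
  optimization: {
    splitChunks: {
      cacheGroups: {
        vendors: {
          test: /[\\/]node_modules[\\/]/, // everything coming from node_modules
          name: "vendors",
          chunks: "all",
        },
      },
    },
  },
};

export default config;
```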

Last but not least, service workers could mitigate this problem with a caching strategy for the dependencies, if that is sensible for the use case.
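
A minimal sketch of such a service worker strategy follows, assuming the vendor bundle can be recognised from its URL; the cache name and the URL pattern are hypothetical.

```typescript
// sw.ts (sketch): cache-first strategy for the long-lived vendor bundle so
// repeat visits do not re-download shared dependencies.
const CACHE_NAME = "vendor-cache-v1";

self.addEventListener("fetch", (event: any) => {
  const url = new URL(event.request.url);
  if (!url.pathname.includes("vendors")) return; // only intercept vendor files

  event.respondWith(
    caches.open(CACHE_NAME).then(async (cache) => {
      const cached = await cache.match(event.request);
      if (cached) return cached; // served from disk, no network round trip
      const response = await fetch(event.request);
      cache.put(event.request, response.clone()); // populate the cache for next time
      return response;
    })
  );
});
```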

Now, it’s inevitable that if we do a bad job bundling our dependencies it will impact the load time, but that doesn’t affect only micro-frontends; it can impact SPAs as well.

With micro-frontends you can also share dependencies (take a look at Single SPA) or choose not to; in the latter case the reason could be how your application is used. For instance, if we understand users’ behaviour in our application, we can “slice” the application so that our users consume one “journey” inside a micro-frontend and start a new one inside another.

What we could discover then is that our users come to our platform performing one journey at a time, and in that case they are going to download only the dependencies and the code needed by that micro-frontend, not all the dependencies used across the entire application.

It’s also true that a user might navigate randomly in our application and therefore download some dependencies multiple times, but in that case it’s up to the teams to review that journey and improve its performance to provide a better experience.

The Pareto principle (or 80/20 rule) states that “… for many events, roughly 80% of the effects come from 20% of the causes”: usually a small set of journeys accounts for most of the traffic, and that is where it pays to optimise first.

API management with micro-frontends can be challenging, but not more so than with other architectures. In our case we are moving from a single services dictionary, where we list all the APIs available, to a dedicated list for each micro-frontend; it requires a bit more work, but it optimises the payload shared between server and client, exposing only the APIs relevant to that micro-frontend.
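
To give an idea of what a per-micro-frontend dictionary could look like, here is a small sketch; the micro-frontend name, endpoints and URLs are all hypothetical.

```typescript
// Sketch of a services dictionary scoped to a single micro-frontend: it lists
// only the endpoints that micro-frontend actually consumes, instead of every
// API available in the platform.
interface ServicesDictionary {
  microFrontend: string;
  apis: Record<string, string>;
}

const checkoutServices: ServicesDictionary = {
  microFrontend: "checkout",
  apis: {
    basket: "https://api.example.com/v1/basket",
    payment: "https://api.example.com/v1/payment",
    // An API owned by another team becomes an explicit cross-team dependency.
    userProfile: "https://api.example.com/v2/user-profile",
  },
};
```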

Obviously, it is not always possible to have APIs related to only one micro-frontend, and in that case we need to manage external dependencies and communication between teams, but it is very limited compared to day-to-day work.

What I want to stress here is that micro-frontends are not perfect either, but a good combination of practices can allow our projects to be delivered to high standards: the right tool for the right job, remember?

In Summary

I truly believe that using the right tool for the right job is essential nowadays. Monoliths, microservices, components, libraries and micro-frontends are tools and techniques for expressing ourselves and our intentions, and they are just one side of “the solution” (the technical side); the other is obviously the business impact generated by using those techniques and tools.

Micro-frontends can really help an organization move faster, innovate inside a business domain and isolate failures. At the same time, I’m not against any form or shape of monolithic application, and I’m not (totally) against centralisation, even though I have often seen libraries of all sorts optimised far too early, without really following where the business was going, adding a layer of pointless abstraction that slowed down developer productivity instead of accelerating it.

Often centralisation causes team frustration, because external dependencies are difficult to resolve when a team cannot influence the work of another one very much.

I know there are ways to mitigate this problem with inner sourcing. I cannot provide many insights on this approach, but from the few talks I have seen and the chats I have had with engineers using inner sourcing in their companies, it could definitely be a good way to share responsibility across different codebases; if you have experience with it, feel free to comment on this post.

Taking balanced decisions is the secret ingredient for success.

Last but not least, bear in mind that the context is the key to understanding a decision.

Architects often write ADRs (Architecture Decision Records): documents that help anyone in the company understand why a decision was made, by describing the context, the options available, the option chosen and, finally, the consequences generated by that decision.

Too often I have seen people judging other companies’ or colleagues’ decisions without understanding the context in which those decisions were made. In reality, the context is even more important than the decision itself, because a decision that sounds horrible or totally inappropriate could, in reality, have been the best (or only) option for that specific context.


As usual, I’m open to discussion, and I’m sure people will disagree with some of the points shared in this post, but that’s the whole point of sharing our experiences and beliefs!

