
End-to-end testing Single Page Apps and Node.js APIs with Cucumber.js & Puppeteer


Single Page Apps are a popular approach to building web applications, but testing them in an end-to-end fashion is not simple; you need to load the backend (potentially a collection of APIs and databases), and make sure that the combination of the SPA and APIs works as expected.

The good news is that there is a way to do this, and in this article we will show you how, using a Behaviour-Driven-Development tool called Cucumber.js, and Google’s web browser library Puppeteer.

If you develop Node.js web applications and want to use E2E testing for them but don’t know how, then this article is for you.

What are Cucumber.js and Puppeteer?

They are two Node.js modules that focus on two very distinct areas of software, but the combination of them will allow you to test your web applications regardless of what frontend and backend libraries you are using.

Cucumber is a Behaviour Driven Development (BDD) tool that allows you to write software requirements as specifications in a human-readable format, and use those specifications to run tests that make sure that the software does what is expected.

Puppeteer is a library from Google that allows you to control Google Chrome (or Chromium) in a programmatic fashion.

We’ll start by looking at Cucumber first.

Cucumber


Cucumber is a BDD tool that allows you to test application features from the perspective of the user, and document how those features work. The idea is that the specs serve a two-fold purpose; they provide living documentation on how the application features should work, and they provide automated tests that can be used to ensure that the software works as per the features.

Let’s give an example. Say you are working on a new web application that handles a job board, and you need to include a feature to make it compliant with GDPR (such as allowing a user to delete their data from the application).

Where do you start? Well, first we need to get the product owner to sit down with the stakeholder(s) (if available) to establish three things:

  • What are we trying to accomplish? (action)
  • Who is trying to accomplish it? (stakeholder)
  • Why are they trying to accomplish it? (business value)

In the context of the product feature mentioned earlier, they would be:

Allow a registered user (stakeholder) to delete their data from the jobs board application (action), so that the application no longer has their data (business value).

We would take these details, and start to put them into a text file, in this format:

A GitHub gist of what the Cucumber feature file would look like at this stage
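As a sketch, the file might contain something like this at the high-level stage (the feature name and wording here are illustrative):

Feature: GDPR data deletion
  Allow a registered user to delete their data from the jobs board
  application, so that the application no longer has their data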

The file would then be saved with a descriptive filename and the “.feature” file extension, so that Cucumber is able to identify it.

A point worth mentioning is that the text in the file is quite similar to what a scrum master would write on a user story, so Cucumber fits quite naturally into such a process.

At this point we’ve captured the high-level requirements, but now we need to describe the details of how the goals of the feature are achieved by the stakeholder. This involves sitting down with the stakeholder (if possible, or using the product owner as a proxy) and other interested members of the team (Customer UX, Design, Development, QA), and describing the steps that the stakeholder takes to accomplish the goal.

You can scribble the steps down on paper, draw them on a whiteboard, or even type them into a text file on your computer — whatever process works for the team. You then collect these steps, check that you are all happy with them, and begin to put them in the Cucumber feature file that we created earlier.

We can begin to write down the steps of a registered user deleting their data from the application. First we need to group these steps in a scenario, give the scenario a name (e.g. Successfully delete my data), and then put the steps for that scenario in the file.

The format of each step is to use a key word first, and then the rest of the step. The keywords are “Given”, “When”, “Then”, and “And”. They help to structure the flow of describing the steps, and also allow Cucumber to parse the text file and identify them.

Here is how the Cucumber feature file might be at this stage:
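A sketch of how it might look, with illustrative step wording:

Feature: GDPR data deletion
  Allow a registered user to delete their data from the jobs board
  application, so that the application no longer has their data

  Scenario: Successfully delete my data
    Given I have registered an account on the jobs board
    And I am logged in
    When I go to my account settings page
    And I click on the "Delete my data" button
    Then I should see a prompt asking me to confirm deleting my data
    When I confirm that I want to delete my data
    Then I should be logged out
    And I should see a message telling me that my data has been deleted
    And my data should no longer exist in the application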

The Cucumber scenario describes the steps from beginning to end, and they are written in a fashion that allows anyone to read them and understand what is going on.

We would then take this file, and put it alongside the source code for our application. First we would create a folder called “features”, and then we would insert the file inside of that folder.

We would then install Cucumber as a development dependency in the application, by typing this into the command line:

npm i cucumber -D

With Cucumber installed, we can now use it to read in the feature file inside of the features folder, parse the steps, and print out code that we will need to use in order to start writing test code for those steps.

Run this command on the command line:

npx cucumber-js

This command will do the following:

  • Look for files with the .feature file extension in the features folder
  • Look for files that contain step definition functions
  • Look for a hooks.js file in the features folder
  • Look for a world.js file in the features folder
  • Look for any step definition functions that are in the step_definitions folder
  • Parse the feature files for Cucumber scenarios
  • Parse the steps in each of those Cucumber scenarios
  • Find any step definition functions that match with those steps
  • For any steps that don’t have matching step definition functions, output sample code to the command line

Here is what the output from running that command will look like:
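The exact formatting varies between Cucumber.js versions, but it will look roughly like this (the step wording and feature filename follow the illustrative scenario above):

UUUUUUUUU

Warnings:

1) Scenario: Successfully delete my data # features/gdpr_data_deletion.feature:5
   ? Given I have registered an account on the jobs board
       Undefined. Implement with the following snippet:

         Given('I have registered an account on the jobs board', function (callback) {
           // Write code here that turns the phrase above into concrete actions
           callback(null, 'pending');
         });

   ? And I am logged in
       Undefined. Implement with the following snippet:

         Given('I am logged in', function (callback) {
           // Write code here that turns the phrase above into concrete actions
           callback(null, 'pending');
         });

   (...similar snippets for the remaining undefined steps)

1 scenario (1 undefined)
9 steps (9 undefined)
0m00.003s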

There is quite a bit going on there.

The first line prints the result of each step that was found. There are 9 steps identified, but none of them can be matched to step definitions, so sample code is printed out for them. At the end we see the number of scenarios run (1), the number of steps run (9), and the time it took to run them.

We can take the code snippets that were printed out in the terminal, and put them inside the features folder. First, we will need to create a folder inside of the features folder called “step_definitions”.

Once you have done that, you can create a JavaScript file inside of that folder which contains the code snippets. It will look something like this:
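A sketch of that file, assuming it is saved as features/step_definitions/example_steps.js (the step wording matches the illustrative scenario above):

const { Given, When, Then } = require('cucumber');

Given('I have registered an account on the jobs board', function (callback) {
  // Write code here that turns the phrase above into concrete actions
  callback(null, 'pending');
});

Given('I am logged in', function (callback) {
  callback(null, 'pending');
});

When('I go to my account settings page', function (callback) {
  callback(null, 'pending');
});

When('I click on the {string} button', function (buttonText, callback) {
  callback(null, 'pending');
});

Then('my data should no longer exist in the application', function (callback) {
  callback(null, 'pending');
});

// ...and so on for the remaining steps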

The next time that you run “npx cucumber-js”, you will see that those step definitions are being matched:

You’ll now see from the output that Cucumber has matched the steps in the feature file to the step definitions that we put in the example_steps.js file. It has started executing them one by one, but has found that the first step is marked as pending, so it has skipped the rest of the steps in that scenario. We are now in a position to start writing code in our step definition files that will execute when we run Cucumber, such as code that drives a web browser.

We will come back to this point later. Now we will take a look at Puppeteer.

Puppeteer


In their own words, Puppeteer is “a Node library which provides a high-level API to control headless Chrome or Chromium over the DevTools Protocol”. It can also be configured to use full (non-headless) Chrome or Chromium.

What does that mean? It means that you can use Puppeteer to boot up Google Chrome (or Chromium), and make it do things like visit web pages, click on web page elements, and much more.

With Puppeteer, you can do these things:

  • Load a website, and start navigating around it through clicking links
  • Generate screenshots and PDFs of pages
  • Crawl through links on a Single Page App, whether for a web spider or doing some page caching for server-side rendering.
  • Fill in form fields and click buttons to automate form submission (as part of a business process automation project)
  • Capture timeline trace data of a site loading, in order to benchmark site loading performance.
  • And of course, you can use it to do automated testing of a web application.

To get started with Puppeteer, simply install it with npm:

npm i puppeteer

Once you have Puppeteer installed, you can then start to use it in your code. A good example is visiting a website (in this case Google’s UK homepage), and taking a screenshot of it. Here is what the code looks like:
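A minimal sketch, assuming we just want to save the screenshot to a file called google-uk.png:

const puppeteer = require('puppeteer');

(async () => {
  // Launch a headless browser, open a page, visit the site and screenshot it
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://www.google.co.uk');
  await page.screenshot({ path: 'google-uk.png' });
  await browser.close();
})();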

The example above is nice and simple to read, thanks to Node’s async/await functionality. This is just one of the many things you can do with Puppeteer, and there are some more examples featured on their website, https://pptr.dev/ .

One of the important things worth mentioning about Puppeteer is that you can specify whether to run it in headless mode or not. Headless mode means that when Puppeteer is booting Chrome (or Chromium), you can tell it to not render any application windows on the computer. This means that you can have it running in the background on your computer without loading any browser windows. It also means that if you want to use Puppeteer to drive the web browser on a server that doesn’t have a graphical desktop environment (say running Linux without Gnome or KDE installed), then you are able to do that. This is particularly useful for using Puppeteer on testing services like Travis and CircleCI, and it means that you can use Puppeteer with server-less solutions such as AWS Lambda (which is particularly useful for doing web spidering at scale).

With this in mind, we now see that Puppeteer can be used as the browser driver for an automated testing setup, which brings us back nicely to how it can be integrated with Cucumber.

Integrating Cucumber and Puppeteer

Now that we have an idea of what Cucumber and Puppeteer do, the next question is: how do we combine them to test a web application?

The answer is to create a file called “world.js” inside of the features folder, and put some code in that file which will be used by the step definitions.

The world.js file is loaded once by Cucumber, but inside it you set up a function that is called before each Cucumber scenario. The code will look something like this:
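A minimal sketch of features/world.js (it assumes the scope.js file described next):

const { setWorldConstructor } = require('cucumber');
const puppeteer = require('puppeteer');
const scope = require('./support/scope');

const World = function () {
  // Bind puppeteer and a fresh per-scenario context to the shared scope object
  scope.driver = puppeteer;
  scope.context = {};
};

setWorldConstructor(World);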

In the world.js file there are 2 important items of note. The first is that the World function is where we bind puppeteer to an object called scope. This function is called before every Cucumber scenario, and it is passed to the setWorldConstructor function.

The second is that we load a file called “scope.js”, which lives in a folder called “support”. We will want to create that “support” folder inside the features folder, and the code for the scope.js file inside it is simply this:

module.exports = {};

This seems odd: why export an empty object from a file? The reason is that we use it as a context that can be accessed by the step definitions. Because Node.js caches modules, once world.js has loaded the file, any other file that requires it receives the same scope object and therefore the same state. This lets us set the state of the scope object before each Cucumber scenario, and pass whatever libraries and other configuration settings are needed to the step definition functions, such as setting puppeteer as the value of the driver property on the scope object.

In our step definitions file, we can now add a require statement for the scope.js file and call puppeteer in our step definition functions. But before we do that, there is another thing we want to set up: the hooks.js file.

The hooks.js file is used by Cucumber to execute functions before and after each Cucumber scenario, as well as before all and after all of the Cucumber scenarios. It is here that we can do these things:

  • Flush the database tables of data in the web application before a Cucumber scenario starts, so that we have a clean state beforehand.
  • Wipe the Puppeteer browser of any cookies after every scenario has run, so that we don’t have those cookies leak into the state of the next Cucumber scenario.
  • Close down any browser windows opened by Puppeteer after all of the Cucumber scenarios have finished running.
  • Issue a command to shut down the web app after all of the Cucumber scenarios have finished running.

Here is what the hooks.js file will look like:
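Something along these lines (a sketch; it assumes the browser and current page are stored on the scope object by the step definitions):

const { After, AfterAll, Before } = require('cucumber');
const scope = require('./support/scope');

Before(async () => {
  // A place to reset application state before each scenario
});

After(async () => {
  // Clear cookies and close the page so state doesn't leak between scenarios
  if (scope.browser && scope.context.currentPage) {
    const cookies = await scope.context.currentPage.cookies();
    if (cookies && cookies.length > 0) {
      await scope.context.currentPage.deleteCookie(...cookies);
    }
    await scope.context.currentPage.close();
    scope.context.currentPage = null;
  }
});

AfterAll(async () => {
  // Close the browser once every scenario has finished running
  if (scope.browser) {
    await scope.browser.close();
  }
});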

The hooks.js file allows you to manage the setup and teardown of state in Cucumber, and we’ll show you later how to apply that to the database tables for the web application.

This now brings us onto the next section, how to test the combination of a Single Page App and Node.js API using Cucumber.

Loading the Single Page App and API in your Cucumber tests

Traditional web applications contained both the backend and the frontend, which made end-to-end testing easy to set up. Fast forward to today, and a web application can consist of a separate frontend application that works in combination with a separate backend application, the API.

In fact, recent trends have seen APIs turned into microservices, so the API ends up being a collection of backend applications with their own dedicated databases and other systems. For the purposes of this article, we will assume that the API is a single backend application.

So how do we load the Single Page App and API in our Cucumber tests, especially if the frontend and backend parts have their own git repositories?

In this case, what I’ve done before is to have the Cucumber tests live in their own dedicated repository, and load the SPA and API via node modules. This approach works really well, and I’ll show you below what I mean. First, setup your Cucumber tests code in a separate git repository, and make sure that the repository has its own package.json file, which you can setup by running this command:

npm init

For the frontend part of your application, you might be using Angular, React, Vue, Ember, or another frontend framework. There are multiple choices, but either way most of them will have a build process where the production version of the frontend is delivered as a collection of HTML, CSS and JavaScript files. In that case, you can easily add that Single Page App as a node module to the Cucumber tests repo.

In the frontend repo, create an index.js file (you can use another filename if there is already an index.js file), and set it up like this:
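A sketch of that file (the port number is an assumption):

const express = require('express');
const path = require('path');
const withShutdown = require('http-shutdown');

const app = express();

// Serve the compiled SPA assets from the build folder
app.use(express.static(path.join(__dirname, 'build')));

// Any other request gets index.html, so that client-side routing still works
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'build', 'index.html'));
});

const port = process.env.PORT || 3000;

// Wrap the server with http-shutdown so it can be stopped gracefully later
const server = withShutdown(app.listen(port));

// Attach the hostname so the Cucumber tests know where to point Puppeteer
server.host = `http://localhost:${port}`;

module.exports = server;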

Here, the index.js file is doing a number of things:

  • It is using Express to do the task of loading the frontend application
  • It is using Express to serve the contents of the build folder (the folder which contains the compiled version of the frontend application)
  • It will respond to other HTTP requests by serving up the index.html file in the build folder.
  • It wraps the express server in another library called ‘http-shutdown’. This library will allow us to gracefully shut down the web app when we are finished using it in our Cucumber tests.
  • We attach the hostname of the web frontend to the server, so that we can lookup this value in our Cucumber tests for Puppeteer to load the web application in the browser.
  • We finally export the server as the object returned by the index.js file.

Make sure that npm dependencies are installed for express and http-shutdown in the frontend repo. That’s the frontend setup, but what about the API?

Well, the API setup is pretty similar. We need the API repo to contain a file (index.js is a good choice) that returns the API server wrapped by http-shutdown. Here is an example for an API repo:
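A sketch along the same lines (the port and routes are assumptions):

const express = require('express');
const withShutdown = require('http-shutdown');

const app = express();
app.use(express.json());

// The API's routes would be mounted here, e.g. app.use('/users', usersRouter);

const port = process.env.PORT || 3001;

// Wrap the server with http-shutdown, and record the host for the tests
const server = withShutdown(app.listen(port));
server.host = `http://localhost:${port}`;

module.exports = server;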

Again, you can see some similarities to how the index.js file looks for the web repo. The API is served as an express app, and http-shutdown is used to wrap access to it. The server object then has a host property set on it, before it is then exported by the file.

Now that the frontend and backend repos have index.js files that we can load externally, we want to include those repos as Node.js modules in our code. We don’t need to publish those repos to NPM, we can simply install them in our integration tests repository like this:

npm i PATH_TO_FRONTEND_GIT_REPOSITORY --save
npm i PATH_TO_BACKEND_GIT_REPOSITORY --save

Your Cucumber app will then have the web and api repos declared as node modules. You should see those added to the dependencies section of your package.json file:

"dependencies": {
  "api": "GIT_PATH_FOR_API_REPO",
  "cucumber": "^4.2.1",
  "puppeteer": "^1.5.0",
  "web": "GIT_PATH_FOR_WEB_REPO"
 },

You can then attach your locally checked-out copies of those repos to the Cucumber tests repo by running this:

npm link ../PATH_TO_FRONTEND_APP
npm link ../PATH_TO_BACKEND_APP

By doing this, changes you make in your local copies of the frontend and backend repositories can be checked by the Cucumber tests.

Now, how do we load the API and Web repos in our Cucumber tests? Well, we edit the world.js file, as well as the hooks.js files. First, here is how you want the world.js file to look:
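A sketch, assuming the SPA and API were installed as the “web” and “api” modules shown in the package.json above:

const { setWorldConstructor } = require('cucumber');
const puppeteer = require('puppeteer');

// Requiring these modules boots both servers straight away
const api = require('api');
const web = require('web');

const scope = require('./support/scope');

const World = function () {
  scope.host = web.host;
  scope.driver = puppeteer;
  scope.api = api;
  scope.web = web;
  scope.context = {};
};

setWorldConstructor(World);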

In the world.js file above, we’ve loaded the web and API repos as node modules. Because both index.js files boot up their servers immediately, simply requiring them takes care of this, so the Single Page App and API will be up when the Cucumber tests start running. We then pass the host for the web repo to the scope, so that we can reference that url in our Cucumber tests, and we attach the api and web repos to the scope as well, for reasons we’ll see later.

Now, in the hooks.js file, we want to make sure that when all the Cucumber tests have finished, we will shut down the SPA and API servers. We can do that by using this code in our hooks.js file:
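The relevant part is the AfterAll hook; a sketch of it, added to the hooks.js file from earlier:

const { AfterAll } = require('cucumber');
const scope = require('./support/scope');

AfterAll(async () => {
  // Close the Puppeteer browser first
  if (scope.browser) {
    await scope.browser.close();
  }
  // Then shut down the SPA and API servers via http-shutdown
  await new Promise((resolve) => scope.web.shutdown(resolve));
  await new Promise((resolve) => scope.api.shutdown(resolve));
});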

The bit of interest here is the AfterAll hook, where shutdown() is called on both the api and web servers. This is where http-shutdown is used to shut down the servers, so that there are no remaining process listeners, and Cucumber can exit cleanly.

Now, this is an important point. Cucumber.js will not exit its process if there are any remaining process listeners in operation. What does that mean? It means that if any of the SPA or API http servers are still listening on a port, then Cucumber will not exit. This is why http-shutdown is used to shut down those servers.

Now, if you find that Cucumber still doesn’t shut down, then the chances are that the remaining process listeners belong to a database connection, such as one to PostgreSQL, MongoDB, or Redis. Those need to be shut down as well. To show you what I mean, here is a version of the hooks.js file that is used in one of our internal projects:
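A sketch of the extended file; the paths to the database wrappers and the model names here are hypothetical:

const { AfterAll, Before } = require('cucumber');
const scope = require('./support/scope');

// Database wrappers and ORM models exposed by the api module (hypothetical paths)
const mongoose = require('mongoose');
const redisClient = require('api/lib/redis');
const { User, Job } = require('api/models');

Before(async () => {
  // Wipe the database collections so each scenario starts from a clean state
  await User.deleteMany({});
  await Job.deleteMany({});
});

AfterAll(async () => {
  if (scope.browser) {
    await scope.browser.close();
  }
  await new Promise((resolve) => scope.web.shutdown(resolve));
  await new Promise((resolve) => scope.api.shutdown(resolve));
  // Close the database connections so no listeners keep the process alive
  await mongoose.disconnect();
  redisClient.quit();
});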

There are 2 changes here. The first is in the AfterAll hook, where we call mongoose (an ODM for MongoDB) and redis to disconnect and close their database connections. The second is at the top of the file, where we’ve required the database wrappers and the ORM models of the API, which we then use in the hooks.js Before function to wipe our database tables clean before each Cucumber scenario. We do this so that we can guarantee that the state of the API is clean before each Cucumber scenario, and that there is no data pollution that can cause side effects in subsequent Cucumber scenarios.

We are now at a point where we have Cucumber tests, we know how to load Puppeteer for our cucumber step definitions to use, and we know how to load the SPA and API for our Cucumber tests repo to setup and teardown. What’s next?

The next thing to think about is what code to write in the step definitions, and how to structure them for readability and ease of use.

Writing the code for the step definitions

The step definitions code is where you turn the Cucumber feature files into test code that actually does things to your web application and database, and checks that everything worked as expected.

There are a number of ways that you can approach this. The strategy that I’ve come up with for organising the code is this:

  • Put the code for the step definition functions in a separate file, and export them from that file. This allows you to structure the step definition matchers in an easy-to-read format.
  • Give the step definition functions meaningful names that reflect the nature of what they do.
  • Use async/await in your step definition functions, because it will give you the ability to combine them for other step definition matchers, and it will make reading your files easier.
  • Use separate files to store objects that reference urls that can exist in your app, as well as CSS selectors that you want to use to select elements on a page and interact with them (or check that they exist).
  • Use scope.context to store references that enable you to check things like the last page that was accessed, so that you can write your Cucumber feature steps in a more human-like form, rather than constantly passing around references in an explicit form.

This might seem like a handful, but I’ll cover each point in the following sections. We will start with the first.

Put the code for the step definition functions in a separate file

In our internal project, the step definition functions live in a file called “actions.js” inside of the support folder, and they are referenced in the common.js file inside of the step_definitions folder. By doing this, we are able to make the common.js file nice and readable. Here is what I mean:
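A sketch of features/step_definitions/common.js; the step wordings and action names here are illustrative:

const { Given, When, Then } = require('cucumber');

const {
  visitHomepage,
  login,
  clickOnItem,
  fillInField,
  pageShouldContainText
} = require('../support/actions');

Given('I am on the homepage', visitHomepage);
Given('I login', login);
When('I click on the {string} button', clickOnItem);
When('I fill in {string} with {string}', fillInField);
Then('I should see {string}', pageShouldContainText);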

What you can see there is that it is pretty easy to quickly scan down the list of step definitions, which is something that you will often do when writing the code for the tests. As for the actions.js file, it will export a number of functions (there are quite a lot so we will simply show a few of them here). Below is a selection of some of the actions in the actions.js file:
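A sketched selection; the selector and page lookups assume the pages.js and selectors.js files described later:

const scope = require('./scope');
const pages = require('./pages');
const selectors = require('./selectors');

const visitHomepage = async () => {
  // Launch the browser the first time an action needs it,
  // with settings for headless mode and execution speed
  if (!scope.browser) {
    scope.browser = await scope.driver.launch({ headless: true, slowMo: 30 });
  }
  scope.context.currentPage = await scope.browser.newPage();
  await scope.context.currentPage.setViewport({ width: 1280, height: 1024 });
  const url = `${scope.host}${pages.homepage}`;
  await scope.context.currentPage.goto(url, { waitUntil: 'networkidle2' });
};

const clickOnItem = async (itemName) => {
  const selector = selectors[itemName];
  await scope.context.currentPage.waitForSelector(selector);
  await scope.context.currentPage.click(selector);
};

const pageShouldContainText = async (text) => {
  const content = await scope.context.currentPage.content();
  if (!content.includes(text)) {
    throw new Error(`Expected the page to contain "${text}"`);
  }
};

module.exports = { visitHomepage, clickOnItem, pageShouldContainText };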

One of the actions of interest here is the “visitHomepage” function. This is the function that is called at the beginning of the stakeholder’s user journey with the web application. What you will notice is that the action will check if there is a browser property set in the scope. If there isn’t, it will tell Puppeteer to launch, with settings on whether to run in headless mode, as well as the execution speed (slowMo). The function then creates a new page, and sets it on the scope’s context as the currentPage, so that other step definition functions can interact with it. It then sets the window’s width and height dimensions, tells the browser to visit the home page, and waits until the page has loaded (networkidle2).

This is how we get Cucumber to start doing automated testing for us in the browser, which becomes powerful over time as you write more Cucumber features that reuse the same steps that are defined in previous Cucumber features.

Give the step definition functions meaningful names that reflect the nature of what they do.

If you scan the naming of the functions referenced in the common.js file, you will notice that they are descriptive of what they do. The benefit of this is that when you come to use combinations of them for other step definitions, it helps you to understand what is going on. We will demonstrate a good example of this in the next section.

Use async/await in your step definition functions, because it will give you the ability to combine them for other step definition matchers, and it will make reading your files easier.

Node’s async/await functionality has been one of my favourite additions to Node.js in recent releases. Traditionally Node.js was written in the style of callbacks, but this would lead to code that would look a bit like a Christmas tree on its side, and was commonly referred to as “callback hell”. Although Node has had Promises for some time, it hasn’t replaced callbacks completely.

Step forward to today, and you can now write your JavaScript code in a way that controls the flow of execution in a nicer manner.

The benefit of this is that we can start to write Cucumber step definitions that DRY up repetitive steps in our Cucumber scenarios. For example, the process of a user logging into an app before being able to use any of its features is a common occurrence. Rather than repeat the steps for login within each Cucumber scenario, what we can do instead is simply write a step called “And I login”, and then for that step definition, write the following code:
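A sketch of that step definition, composed from actions that would already exist in actions.js (the action names, field names, and credentials here are all illustrative):

const login = async () => {
  await visitHomepage();
  await clickOnItem('login link');
  await fillInField('email', 'registered-user@example.com');
  await fillInField('password', 'a-password');
  await clickOnItem('login button');
  await waitForPageToLoad();
};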

That step definition function is written in an expanded form for the sake of readability. Here we can see the flow of actions being taken to log in. We’ve managed to combine a number of step definition functions that are used by other matchers, allowing us to reuse code that has already been written. This enables the Cucumber feature writer to express actions in fewer steps, making the Cucumber files simpler and nicer to read.

Use separate files to store objects that reference urls that can exist in your app, as well as CSS selectors that you want to use to select elements on a page and interact with them (or check that they exist).

As you start to test more and more of your application, you will find that you start to have a number of urls and web page elements that you’ll want to access and check. At this point it’s good to break out those values into their own files inside of the support folder. By doing this, you keep the actions.js file readable. Here are the current examples of the pages.js and selectors.js files:
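Here is a sketch of the pages.js file (the route names and paths are hypothetical):

// features/support/pages.js
module.exports = {
  homepage: '/',
  login: '/login',
  'account settings': '/account/settings'
};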

Defining your page routes in this fashion might seem like duplicating references in both the web frontend and the Cucumber tests, but the benefit is that your test implementation remains agnostic of whatever framework is being used to serve the frontend application. We will come back to this point later.

And here is what the selectors.js file looks like:
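A sketch (the names and CSS selectors are hypothetical):

// features/support/selectors.js
module.exports = {
  'login link': 'a[href="/login"]',
  'email field': 'input[name="email"]',
  'password field': 'input[name="password"]',
  'login button': 'button[type="submit"]',
  'Delete my data button': '#delete-my-data'
};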

Again, a similar concept is applied for the selectors.js file. Here we define a bunch of CSS selectors based on their property name and structure in the selectors object. It is not a perfect example, and chances are it will evolve with time, but it gives you an idea of how to approach it for now.

Use scope.context to store references that enable you to check things like the last page that was accessed, so that you can write your Cucumber feature steps in a more human-like form, rather than constantly passing around references in an explicit form.

You might have noticed from earlier on that some of the Cucumber steps can contain variables inside of quotes. This allows you to pass variables to your step definition functions, which makes it easy to reuse Cucumber steps across your scenarios.

That said, support for that feature can lead to a tendency to express explicit checks in quotes in the Cucumber steps, which starts to make the Cucumber scenarios a bit bloated and sound more like they were written by a developer than by a stakeholder or product owner.

Because there is a way of tracking state during the execution of step definition functions (scope.context), we can use this to track what page we are currently on, whether we have a browser window open or not, and thus make our step definition functions a bit more knowledgeable. You can see an example of this in the visitHomepage action.

This is quite a lot to take in, but the good news is that we’ve covered almost everything; all that remains now is to cover 2 more items.

Firstly, if you can keep your page routes and selectors as standard as possible, then you are in a position to swap out the frontend of the application (should you, say, decide to write the Single Page App in another framework) without having to rewrite the tests.

I actually did a proof of concept of this at the Ember London meet-up in June this year. I showed them an application where the SPA component had been written in React, and then I showed them an Ember application that replicated one of the features of the React app. I then showed the Cucumber tests being run for a single scenario, with the Ember app being loaded as the SPA instead of the React app. The tests passed, and I didn’t have to rewrite the Cucumber tests at all. I simply had to reconfigure which web app frontend was loaded in the world.js file.

This is important, because long-running web applications do tend to have their frontends rewritten over time. It is really good if you are in a position to be able to reuse the same code for your tests if you find yourself in such a position.

Another item I want to cover is that of being able to load your local copies of the SPA and the API in an environment for testing.

Loading your SPA and API for testing

When you’re running your local copies of the SPA and API, you’ll likely have a development environment config, so when you want to run integration tests on them, you’ll want to switch to a test environment configuration; that way you don’t clobber the state of your development environment every time you want to run tests.

So how do you get the Cucumber tests in one repo to load the SPA and API with their test environment configurations?

The secret is to use the NODE_ENV process environment variable to reference your environment, and to set it to “test” as a precondition of running your Cucumber tests. By doing this, the API will inherit the process environment variable as well, and so you can get your application to load a separate configuration for the test environment.

But how do you do that for the SPA? When a SPA is being viewed in a browser, it doesn’t access any server-side values, and so it needs to have some way of knowing that it is supposed to point to the API for the test environment.

The way you can do this (which works with React apps) is to have a build step in your SPA that does the job of generating the config that is used by the SPA. In the case of our internal project, there is an entry in the scripts section of the package.json file that is called “config”, and it does this:

"config": "node scripts/generateWebConfig.js",

In the scripts folder of the SPA repo, the generateWebConfig.js script will generate the config.js file in the src folder, like this:
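A sketch of what such a script could look like (the environment names and API urls are assumptions):

// scripts/generateWebConfig.js
const fs = require('fs');
const path = require('path');

const environment = process.env.NODE_ENV || 'development';

const apiUrls = {
  development: 'http://localhost:3001',
  test: 'http://localhost:3002',
  production: 'https://api.example.com'
};

const fileContents = `// This file is generated by scripts/generateWebConfig.js
const config = {
  environment: '${environment}',
  apiUrl: '${apiUrls[environment]}'
};

export default config;
`;

// Write the config into the src folder so the SPA build picks it up
fs.writeFileSync(path.join(__dirname, '..', 'src', 'config.js'), fileContents);
console.log(`Generated src/config.js for the ${environment} environment`);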

This file can be run to generate the config.js file for the SPA, and it can inherit process environment variables to then scope what values get output to the config.js file in the src folder.

The scripts section of the SPA repo’s package.json file is then amended to look like this:

"scripts": {
  "build": "npm run config && react-scripts build",
  "config": "node scripts/generateWebConfig.js",

The config script is called as part of the npm run build command, which means that we can call “npm run build” on our repo and know that the config for the app is generated based on the value of the NODE_ENV process environment variable.

So now comes an interesting question. How do we get the SPA repo to generate the config file for the test environment before it runs?

The good news is that NPM provides a way to do this. In the integration tests repo, we can leverage NPM’s explore command to execute commands inside of a dependency or devDependency. Here is how the scripts for the integration repo look:

"scripts": {
  "pretest": "NODE_ENV=test npm explore web -- npm run build",
  "test": "NODE_ENV=test npx cucumber-js --no-strict",
  "posttest": "NODE_ENV=development npm explore web -- npm run config",

What you will see here is that there are 3 commands: pretest, test, and posttest. If you call npm test from the command line on the integration test repo, NPM will first run pretest’s command, then it will run the test command, then it will run the posttest command.

In the pretest command, we set the NODE_ENV environment variable to test, and then we call “npm explore web -- npm run build”. This triggers npm run build inside the web Node.js module, which is the repo for the SPA, generating the web repo’s config file for the test environment and then compiling a version of the source code to the build folder.

We then call test, again passing the NODE_ENV=test environment variable, and call cucumber-js to run the tests.

Once that finishes, we do a bit of cleanup in the posttest command by running npm explore web with the config command on the web repo, this time setting the NODE_ENV environment variable to development. We do this so that after the tests have finished running, the web repo is back in a state for continuing local development (and to avoid initial confusion when the frontend complains about hitting a network error even though the local development API is up and running).

From our integration tests repo, we are able to coordinate setting up the web and api repos’ configurations for running the tests against them.

Conclusion

This pretty much covers most of what is involved with being able to test your Single Page Apps and APIs with Cucumber.js and Puppeteer.

If you want to learn a bit more, there are some more details in the talk I gave at the London Node.js User Group meeting back in April 2018. You can find the slides here (https://www.slideshare.net/paulbjensen/e2e-testing-single-page-apps-and-apis-with-cucumberjs-and-puppeteer), and watch the YouTube video of the talk here: https://www.youtube.com/watch?v=5MB-jGWqoJU

If you have any questions, please feel free to email me at [email protected]

One final thing to mention: if you want to use this strategy and are using CircleCI for running your tests, this recipe will be of interest: https://github.com/anephenix/puppeteer-circleci-recipe

About Anephenix

Anephenix is my software consultancy, and I’m currently available for new projects.

