Top Security Mistakes to Avoid in AppDev

By Zach Gardner | June 30, 2022 | Architecture, Security

Developing custom applications is one of the hardest professional endeavors, and making them secure is even harder. Malicious actors are constantly changing tactics and strategies, which, unfortunately, makes it impossible to eliminate security threats entirely.

There needs to be a balance between delivering features quickly to meet business objectives and mitigating security risks. Thankfully, these two goals are not mutually exclusive. This blog post dives into the top mistakes that can be made while developing custom applications.

These recommendations are different from what would commonly be seen in an OWASP list, and they should be used in addition to whatever security practices and procedures an organization's infosec department already has in place. They are also written from an application architect's perspective rather than an enterprise infrastructure architect's, so most of them aren't covered by existing security checklists.

Reusing Service Accounts

Most of the functionality we build for SPAs revolves around user-facing features. At times, background jobs or processes need to happen even when there is no currently logged-in user. This can make the authentication and authorization code difficult since all of the APIs rely on an active, unexpired authentication token to pass security checks. When this situation comes up, the default mentality is to use a service account for authentication and authorization.
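In practice, that headless authentication step usually means exchanging the service account's credentials for a token of its own, most commonly through an OAuth2 client credentials grant. Below is a minimal sketch of that exchange, assuming an OAuth2-capable identity provider; the token endpoint URL and class names are illustrative, and each background job would ideally use its own client registration rather than a shared one.

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class HeadlessTokenClient {

        // Illustrative endpoint; the real value comes from your identity provider.
        private static final String TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token";

        // Exchanges a service account's client ID and secret for a short-lived access token.
        // Each headless job should have its own client registration instead of sharing one.
        public static String requestToken(String clientId, String clientSecret) throws Exception {
            String form = "grant_type=client_credentials"
                    + "&client_id=" + URLEncoder.encode(clientId, StandardCharsets.UTF_8)
                    + "&client_secret=" + URLEncoder.encode(clientSecret, StandardCharsets.UTF_8);

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(TOKEN_ENDPOINT))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(form))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // The JSON response contains an access_token field; parse it with your JSON library of choice.
            return response.body();
        }
    }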

Getting a new service account provisioned for this sort of headless functionality can be difficult, especially in large organizations. There are tickets to fill out in whatever system the organization uses (e.g. ServiceNow), explanations owed to the account provisioning team about why the account is needed, and conversations with the infosec or IT security teams to walk through the use case and find out if anything else can be done. This non-trivial process often takes longer than the actual development of the functionality that will leverage the service account.

So, it's easy to simply reuse one service account for any and all headless calls. That, however, is a trap. It's an easy mistake to make, and it's commonly made, much like how water always follows the path of least resistance. The impact of making this choice can seem small, until there is a breach of that account and multiple use cases are put at risk.

The decision has to be made, then, to shut all of this functionality off for every different feature, potentially across multiple applications, until a new account can be provisioned. Or, if its password can be safely updated, then every place that uses the account needs to be updated. No matter which choice is made, the surface area due to the reuse of the service account is much broader than it needs to be.

Security risks are an unavoidable part of the software development process, so they should be coupled with mitigation strategies to minimize their impact. Requesting a new service account is hard, and it should be hard. These are often accounts with non-expiring passwords that have access to important business data. Being able to justify why these are needed is a valid business and IT decision.

So, rather than trying to go against the grain, go with the flow. Build provisioning of new service accounts into the feature development process. Be open and transparent with all of the teams involved that you will be requesting new ones in the future for different use cases, explain why you are doing so, and encourage them to take the same approach with other teams. Much like any IT organization worth its salt wouldn't allow multiple users to share the same credentials, it's difficult to justify reusing a service account across different use cases.

Outside of a strict security concern, any service account that sends communications to end users needs to have a proper strategy in place to manage what happens when those messages fail to send or when a user responds back to them.

It's easy for users to think they can simply email the account back and speak to a human unless there is proper messaging in place to inform them that it is an unmonitored account. Specifying a Reply-To address, like "[email protected]," helps. Setting an Out-Of-Office message also helps set the right expectation. We recommend that at least a few members of the team with access to the service account add it to their email clients and periodically monitor the communications coming back to it. Rules on the inbox can automatically mark messages as read and leave only the ones that need human intervention unread.

Connection Strings With Hardcoded Credentials

Almost every application that we've worked on at Keyhole Software includes some API or middleware layer that connects a user-facing UI to a data persistence layer, like a database, or to an event messaging system like RabbitMQ or Kafka. The default mechanism these services offer for authentication and authorization is almost always the provisioning of a "user" record within their system and a connection string, including a password, that the API uses to connect. Putting this connection information into an application's configuration file (with the username and password in plain text) is, unfortunately, the norm in custom applications.

This kind of practice violates the Zero Trust principle. If an application is built on the assumption that there are some places it can trust to be secure, it shortcuts some common-sense security practices, and that is the place where the surface area increases. Especially when considering that the connection string to these services can provide an attacker access to the underlying data (the most valuable asset of the business), it is easy to see why this is one of the most common ways for security vulnerabilities to be exploited for malicious purposes.

The best way we've seen at Keyhole Software to mitigate this risk is to not use ad hoc usernames and passwords but to use managed identities. This relies on registering a service account (see the previous security mistake for more on service accounts) in the identity provider and then provisioning that service account with access to the resource. This allows a security operations team to centralize account authentication monitoring in one system and lock out an account if suspicious activity is detected.
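As one concrete illustration of that approach (assuming an Azure-hosted workload and the azure-identity library; other clouds offer equivalents), a managed identity is exchanged at runtime for a short-lived token, so no password ever appears in configuration:

    import com.azure.core.credential.AccessToken;
    import com.azure.core.credential.TokenRequestContext;
    import com.azure.identity.DefaultAzureCredential;
    import com.azure.identity.DefaultAzureCredentialBuilder;

    public class ManagedIdentityTokenExample {
        public static void main(String[] args) {
            // DefaultAzureCredential picks up the managed identity when running in Azure,
            // and falls back to developer credentials when running locally.
            DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build();

            // Request a short-lived token scoped to Azure SQL; no password is stored anywhere.
            TokenRequestContext context = new TokenRequestContext()
                    .addScopes("https://database.windows.net/.default");
            AccessToken token = credential.getToken(context).block();

            // The token is then handed to the database driver (for example, the access-token
            // setting of the Microsoft SQL Server JDBC driver) instead of a username/password.
            System.out.println("Token expires at: " + token.getExpiresAt());
        }
    }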

It is the unfortunate reality that some data persistence and messaging services do not allow for managed identity authentication and authorization. When that is the case, the appropriate response is to put proper management policies and procedures in place. For example, if ad hoc users or service credentials are the only way to connect to the service, put in a policy with the DevSecOps or SRE team to periodically rotate the credentials out, perhaps every 60 to 90 days.

This ensures that if an attacker does get hold of stale credentials, the surface area is much smaller, as the credentials will have already been rotated out. This strategy works especially well if the service supports primary as well as secondary credentials. Regenerating the secondary credentials, updating all of the applications that connect to the service, and then rotating out the primary credentials allows for a zero-downtime security upgrade.
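One way to keep an application rotation-friendly is to look the credentials up from an external source on every connection attempt rather than baking them into the deployed configuration. The sketch below assumes the credentials live in a secret store that exposes primary and secondary values; environment variables stand in for that store only to keep the example self-contained.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class RotatingCredentialDataSource {
        private final String jdbcUrl;

        public RotatingCredentialDataSource(String jdbcUrl) {
            this.jdbcUrl = jdbcUrl;
        }

        public Connection getConnection() throws SQLException {
            // Re-read the credentials on every connection attempt. In a real deployment these
            // would come from a secret manager (Vault, AWS Secrets Manager, Azure Key Vault, etc.);
            // environment variables are used here purely for illustration.
            String user = System.getenv("DB_USER");
            String primary = System.getenv("DB_PASSWORD_PRIMARY");
            String secondary = System.getenv("DB_PASSWORD_SECONDARY");

            try {
                return DriverManager.getConnection(jdbcUrl, user, primary);
            } catch (SQLException e) {
                // During a rotation window the primary credential may already be regenerated;
                // falling back to the secondary keeps the application connected with zero downtime.
                return DriverManager.getConnection(jdbcUrl, user, secondary);
            }
        }
    }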

Following REST Too Far

REST is among the most popular API structural patterns we've seen. It allows the UI and API to share a homogeneous model of how entities move between the two layers, reducing the probability that a bug will be introduced through simple mistakes like a naming mismatch or a fundamental misunderstanding of how the data should be modeled.

Unfortunately, the REST model does not always translate well into a secure medium in enterprise applications. For example, in a banking application, the UI might need to offer the ability for an administrator to search for a user by their first or last name. Following the REST methodology to its logical conclusion, the API call must be a GET, so the request ends up looking like this:
GET https://mybank.com/api/users?firstName=John&lastName=Smith
The URI of this request, including the query string, will likely end up in logs on the file system or in whatever logging system is in use. This can expose the PII in a potentially unaudited or non-auditable system, opening up the organization to legal liability. This means that the development team, and potentially the architects, need to be aware of what can be included in the query string and what can't. The recommended way to allow for this functionality would be something to the effect of:

{"firstName": "John", "lastName": "Smith"}

Although it is a POST, we are using the verb not to create something, but to find a list of things. This is a simple example, but it illustrates an interesting architectural implication of using REST.
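A minimal sketch of what this might look like in a Spring-style controller (class, endpoint, and field names are illustrative, not taken from a real banking API): the search criteria travel in the request body, so names never appear in a logged URI.

    import java.util.List;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    @RequestMapping("/api/users")
    public class UserSearchController {

        // Hypothetical request/response shapes; the real entities would come from the application.
        public record UserSearchRequest(String firstName, String lastName) {}
        public record UserSummary(String id, String displayName) {}

        // Hypothetical service; an implementation would be provided elsewhere in the application.
        public interface UserService {
            List<UserSummary> findByName(String firstName, String lastName);
        }

        private final UserService userService;

        public UserSearchController(UserService userService) {
            this.userService = userService;
        }

        // The names travel in the POST body, so the PII never appears in the URI
        // that web servers, gateways, and log aggregators routinely record.
        @PostMapping("/search")
        public List<UserSummary> search(@RequestBody UserSearchRequest criteria) {
            return userService.findByName(criteria.firstName(), criteria.lastName());
        }
    }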

There are some other fundamental ways to prevent this kind of thing from happening. Using action APIs instead of REST (e.g. https://mybank.com/api/users/SearchForUsers) is one option. Another option is to use GraphQL, where more focus is placed on the body, and everything flows through a similar structure.

Rolling Your Own Authentication/Authorization

Any software team that has gone through the process of getting a piece of third-party software or a library "approved" knows that the process is painful and becomes exponentially more painful the larger an organization gets. It's important to ensure that a custom application only leverages dependencies that are approved and whose license implications are understood. The reality, though, is that sometimes software teams will write their own version of the dependency rather than spend the time going through the approval process.

One area where this occasionally happens is the authentication/authorization of an application. Rather than getting something approved, the development team sometimes feels the need to write its own security layer into the application.

This is one of the biggest mistakes that can happen in software development. Security is extremely hard to get right, especially for a team of software developers whose specialty is producing features for a particular business or service line rather than security itself. Rolling your own authentication or authorization layer means the access paths used to read, insert, and update business data may or may not be secure.

Although it can be harder to rely on a well-tested piece of software or library to enforce security, it is critical that it be used. The ideal scenario is that the security code was written to be configurable (e.g. Spring Security) so that it can be adapted to fit the particular use case of the application. Perhaps the organization requires that all authorization flow through a custom HTTP header, or only via cookies, or needs to pull from an enterprise data source. As long as the security library is able to accommodate the need, it is essential to leverage a piece of tested and secure software to enforce authentication and authorization.
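As a sketch of what that configuration-over-custom-code approach can look like, assuming Spring Security 6 with a JWT-based resource server (one possibility among many, not a requirement of the post), the library enforces authentication on every endpoint while the team supplies only configuration:

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.security.config.Customizer;
    import org.springframework.security.config.annotation.web.builders.HttpSecurity;
    import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
    import org.springframework.security.web.SecurityFilterChain;

    @Configuration
    @EnableWebSecurity
    public class SecurityConfig {

        @Bean
        SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
            http
                // Every request must be authenticated; no hand-written token parsing or session logic.
                .authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
                // Validation of incoming JWTs (signature, expiry, issuer) is delegated to the library.
                .oauth2ResourceServer(oauth2 -> oauth2.jwt(Customizer.withDefaults()));
            return http.build();
        }
    }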

No Network Level Security

In the same vein as the previous recommendation, having multiple trusted layers of security is always preferable. The application needs to be secure from a coding standpoint as well as from a networking standpoint. Network-level security means that only authorized systems can even communicate with the components of the application.

This is often a great entry point for enterprise infrastructure architects to meet application architects where they are. An application can run even more securely in the cloud if there are restrictions on individual resources as to which other components can communicate with them over the network. Or, in an on-prem scenario, network segmentation ensures that control can take place at the packet-routing level. This should work in tandem with an application's existing security and should not be the only perimeter put in place to protect the application's data.

One side-effect of network-level security is that, due to large organizations having granular RBAC, the application team may not be able to make all of the changes they need to when spinning up a new instance of the application. The application’s DevOps or SRE team might not have the ability to configure network components. This should be taken in stride and budgeted into the schedule of the application.

Knowing who the company's network experts are is often half the battle. Understanding how they are getting pulled in a million different directions helps frame and set expectations for how to best interact with that team. The configuration of things like firewalls and virtual networks is likely a skill that the application team does not, and perhaps should not, have. Specialization of responsibilities allows large organizations to maintain multiple areas of expertise that work together in (theoretical) harmony toward the same goal. Networking teams providing this skill should be looked at as a required asset for any application that manages business-critical data.

Conclusion

Keyhole Software has seen firsthand the need for greater security for custom applications. The recommendations and checklists that teams need to fill out as part of regular security reviews are often written from an enterprise infrastructure architect’s perspective. We bring a level of expertise, having written hundreds of applications in dozens of different industries, to identify mistakes and mitigation strategies that only application architects can truly understand.

The recommendations in this blog post are simply the beginning of the kinds of security considerations needed to ensure that the data an application collects, often the most important IP of any organization, remains safe and secure.

If your organization is in need of outside help or guidance on the security front, reach out. We have a proven track record of helping large and small organizations alike secure their applications and networks, and we can offer the same to your team.

Send inquiries to [email protected], or fill out a Contact Form and a member of our team will be in touch shortly.

