It started with a simple idea: allow users to download a small piece of software that would enhance their online experience. Programmers could use this software to customize pages, perform data validation on forms, and handle some basic processing, removing these burdens from the server. When Sun released its Java language and Java Virtual Machine (JVM) environment, the possibilities seemed endless. Sun’s JVM was supposed to revolutionize Internet programming: a developer would write one program that could run anywhere the JVM was available. With the JVM available in most popular browsers, the concept of mobile code was born. Mobile code is defined as small pieces of software that are automatically downloaded to the user’s workstation and executed without the user’s initiation or knowledge.
Mobile code can be downloaded from many sources. With the proliferation of mobile devices such as personal digital assistants (PDAs) and cellular phones capable of executing mobile code, understanding the risks of mobile code is important for setting security policies. These new devices have not only the processing power to run mobile code but also the memory to store significant amounts of information, including sensitive data such as the organization’s employee and customer lists. Put simply, there are security risks in using mobile code without appropriate controls in place.
The use of mobile code to create viruses and Trojan horses has been well documented in the mainstream press. Less emphasized are the other dangers posed by malicious mobile code.
The problem with mobile code is that, under the guise of providing flexible services for legitimate programmers, some technologies allow the code full access to system resources and services, while other technologies attempt to mediate or constrain the code to a restricted environment. Technologies such as ActiveX and Visual Basic for Applications (VBA), for example, can be used to write mobile code that has full access to system resources.
Technologies such as Java applets and JavaScript attempt to mediate or constrain the code to a restricted environment.
Even mobile code that executes in a restricted environment can have security implications, however, because the restricted environment itself could be at risk. Despite its careful development, the Java Virtual Machine still contains a number of bugs, and the implementation of a seemingly simple request can compromise security.
The most public examples of the threat of malicious mobile code are the various worms written in VBA. These pieces of mobile code performed functions on the system that should never have been permitted to software downloaded from outside the environment: the lack of containment and access control allowed a worm to read the user's address book and transmit itself to every user listed there.
Understanding the Basis for Policies
Mobile code should be allowed in some situations but not in others. One way to implement this strategy is to establish a level of trust between systems, or even between technologies. Another is to use code signing to identify the sender of the code. The concept is to execute only mobile code that comes from a trusted source. The problem is that there are ways to defeat this method, and users may simply ignore the warnings.
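The "trusted source" concept can be sketched in a few lines of code. The Python fragment below is a simplified illustration using a digest allow-list; real code signing verifies a publisher's certificate with public-key cryptography, and all names here are hypothetical.

```python
import hashlib

# Hypothetical allow-list of digests for code that has already been
# reviewed and approved. Real code signing verifies a publisher's
# certificate rather than a static digest.
TRUSTED_DIGESTS = {
    # SHA-256 of b"hello", standing in for an approved piece of code.
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}

def is_trusted(code: bytes) -> bool:
    """Return True only if this exact code has been approved."""
    return hashlib.sha256(code).hexdigest() in TRUSTED_DIGESTS

def run_mobile_code(code: bytes) -> str:
    # The policy decision: refuse by default, execute only trusted code.
    if not is_trusted(code):
        return "REFUSED: untrusted source"
    return "EXECUTED"
```

Note that the check itself is only as strong as its surroundings: if the trusted source is compromised, or a user clicks through the warning and runs the code anyway, the mechanism is defeated.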
Mobile code employs diverse technologies, which change as fast as users ask for new features. While it may be a good idea to “baseline” requirements on a few technologies, doing so may hamper the ability to use newer technologies to provide improved service. Additionally, users demand the functionality and convenience that mobile code provides, especially in support of many Web-based services. This has caused many organizations to review their security policies and how they handle mobile code.
Even with the demands for functionality, the potential security problems cannot be ignored. Mobile code could be used to initiate denial-of-service (DoS) attacks, compromise information, or corrupt sensitive data. Although these problems may never occur, policies must be written to protect your organization’s systems and networks from mobile code that could be used to compromise critical information.
Considering these problems, one way to create your mobile code policy is to analyze each of the technologies used and assign it to a risk category based on its potential threat. Each of the risk categories is then assigned to a corresponding policy. When mobile code technology changes or new technologies are requested, you can have a policy that requires the technology to undergo a risk assessment so that it can be assigned to one of your risk categories. To prevent problems, your policy should state that if a mobile code technology has not been assigned a risk category, its use on your organization’s computers and networks is prohibited.
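The default-to-prohibited rule described above can be sketched as a simple lookup table. The technology names and category assignments below are illustrative assumptions, not recommendations; each organization makes its own assignments through its risk-assessment process.

```python
# Illustrative assignments only; each organization performs its own
# risk assessment before a technology earns an entry in this table.
RISK_CATEGORY = {
    "ActiveX": "HIGH RISK",
    "VBA": "HIGH RISK",
    "Java applets": "MEDIUM RISK",
    "JavaScript": "MEDIUM RISK",
}

def category_for(technology: str) -> str:
    # A technology that has not been through a risk assessment has no
    # entry, so its use falls through to prohibition by default.
    return RISK_CATEGORY.get(technology, "PROHIBITED")
```

The design choice worth noting is the default: a new or unreviewed technology is prohibited until a risk assessment assigns it a category, rather than allowed until someone objects.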
Determining and Assigning Risk
There is no formula for assigning risk. In some organizations, ActiveX or Java applets served from trusted servers and only used within the organization's network are not a significant risk for the environment. However, others may see the use of ActiveX and Java applets as too risky unless signed or even reviewed before their use. Therefore, as you write your policy, you should understand the technologies, their classes of access controls and mitigations, and where each policy should be applied.
Before setting policies, we need to define the risk types or categories. They can be defined in any way that is meaningful to your organization. I have found that the best way to do this is to base the risk categories on how well the technology controls access to the system that will run the code. Simply put, I create the following three risk categories:

High Risk: Technologies that give mobile code full, largely unmediated access to system resources and services.
Medium Risk: Technologies that attempt to mediate or constrain the code to a restricted environment, but whose restrictions can fail or be bypassed.
Low Risk: Technologies that confine mobile code to a tightly restricted environment with little or no access to system resources.
Using this type of risk assessment, it is easy to categorize the various technologies. This does not mean, however, that you should write policies banning all technologies considered High Risk. There are other considerations, such as the execution environment, and other mitigations, such as code signing, that could reduce the risk to a level that is not a concern for your organization. A common way of showing that a risk has a mitigation strategy, or that a technology is acceptable when the mitigation is applied, is to classify it with the qualifier MITIGATED.
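One way to represent the qualifier is to keep the base category and attach MITIGATED only when an approved control, such as code signing from a trusted system, is in place. A minimal sketch, with hypothetical function names:

```python
def classify(base_category: str, mitigated: bool) -> str:
    """Attach the MITIGATED qualifier when an approved control is in place."""
    return f"{base_category} (MITIGATED)" if mitigated else base_category

def classify_applet(signed_by_trusted_system: bool) -> str:
    # Assumes applets are Medium Risk and that code signing from a
    # trusted system is the organization's approved mitigation.
    return classify("MEDIUM RISK", mitigated=signed_by_trusted_system)
```

Keeping the base category visible, rather than inventing a new standalone category, preserves the original risk assessment while recording that a control is in effect.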
For example, if your organization has a policy that allows code signing of applets from trusted systems as a mitigation strategy, you can create two additional categories such as the following:

High Risk (MITIGATED): A High Risk technology whose use is acceptable because an approved mitigation, such as code signing from a trusted system, is in place.
Medium Risk (MITIGATED): A Medium Risk technology used with an approved mitigation in place.
Although this sounds like a good idea, I prefer staying with the original three levels of risk and assigning policy based on context or domain. The context or domain considers where the mobile code originated and where it is being transmitted. While we tend to think of transmittal of mobile code to and from the Internet as the only threat, transmission between departments or between business units can also define domains that require policy considerations.
Before writing these policies, you should categorize the various technologies so that you can see how to create the policy. I am not suggesting that these categories become part of your policy. However, understanding how the technologies are being classified and where they may be used may provide informative background to the policies.
Based on the risk categories outlined above, here is how I have classified various popular mobile code technologies:
Writing Mobile Code Policies
The last time I discussed writing mobile code policies with a client, they were ready to ban all High Risk and Medium Risk technologies, allowing only those classified as Low Risk. This was fine until I showed them that doing so would exclude Java applets and would prevent many of their applications from being used. My first suggestion was to create four categories of policies based on access:

Mobile code used entirely within a single department.
Mobile code transmitted between departments or business units over the organization's intranet.
Mobile code received from the Internet.
Mobile code served by the organization to users on the Internet.
Once the access categories were decided, I suggested that for each access category the policy be written based on risk level. In this scenario, my client was able to create policies that specified the controls required to allow High Risk mobile code technologies within the organization's intranet. This continues until a policy is defined for every access category.
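The access-category-by-risk-level approach amounts to a small decision table. The entries below are illustrative assumptions, not the client's actual policy; they capture the pattern of permitting High Risk code on the intranet with controls while denying it from the Internet.

```python
# Decision table: access category (rows) by risk level (columns).
# Entries are illustrative, not an actual organization's policy.
POLICY = {
    "intranet": {"HIGH RISK": "ALLOW WITH CONTROLS",
                 "MEDIUM RISK": "ALLOW",
                 "LOW RISK": "ALLOW"},
    "internet": {"HIGH RISK": "DENY",
                 "MEDIUM RISK": "ALLOW WITH CONTROLS",
                 "LOW RISK": "ALLOW"},
}

def decide(access_category: str, risk_level: str) -> str:
    # Any combination not covered by the table is denied by default.
    return POLICY.get(access_category, {}).get(risk_level, "DENY")
```

Writing the policy as a complete matrix forces every combination of access category and risk level to receive an explicit decision, with anything unlisted denied by default.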
Even with the demands for the functionality offered by mobile code, security policies must be written to protect your organization’s systems and networks from malicious mobile code. Considering these problems, one way to create your mobile code policy is to analyze each of the technologies used and assign it to a risk category based on its potential threat. Each risk category is then assigned a corresponding policy for its usage. There is no formula for assigning risk, but one of the best ways to perform the assessment is to base the risk on how well the technology controls access to the system that will run the code. The policies can then be written based on how the user will access mobile code. Using this method, a High Risk technology can be used within the organization’s intranet with appropriate controls, while its use is denied when served via the Internet.
All questions, comments, and corrections may be e-mailed to the author. Last update: October 09, 2011.