Why is server hardening critical for the enterprise?
Server hardening is essential for security and compliance. To ensure the reliable and secure delivery of data, all servers must be secured through hardening. Server hardening helps prevent unauthorized access, unauthorized use, and disruptions in service. It is an essential part of the installation and maintenance of servers, ensuring data integrity, confidentiality, and availability.
Why do most servers end up with weak policies?
While the requirements for server hardening come from IT security and compliance teams, the actual work is performed by operations teams. Proper server hardening involves intensive manual work to test each policy's impact in a lab environment, to make sure that hardening the servers won't damage production operations. For IT operations, the primary concern is keeping all business operations running, so they face a conflict with the security requirement to harden servers.
Because it is so hard to properly harden servers without causing downtime, operating system and application misconfigurations are common, creating security flaws. Days or even weeks can pass between a change in a hardening recommendation, or the release of an update, and its actual implementation; in the meantime, the organization is exposed.
For your enterprise this means:
* Exposure to vulnerabilities due to servers which are not properly configured and hardened
* Compliance breaches, as servers are not aligned with compliance requirements
* Exposure to auditors’ fines and findings
There are three main reasons for enterprises to struggle with hardening:
* Testing must be performed before hardening servers. Hardening without testing, simulation, and a learning process can put day-to-day operations at risk, and testing requires a large investment in manual work.
* Any user with administrative rights can change the configuration of a hardened server, causing configuration drifts and non-compliant servers.
* Multiple policies and environments are difficult to manage.
Why is hardening servers with a “weak” security policy not enough?
Hardening benchmarks for operating systems and applications, such as CIS and Microsoft SCM, provide a list of a few hundred objects that should be hardened. It is common to see organizations enforce only a few dozen of these recommended policy objects. The reason for choosing a “weak” security policy is the fear of a conflict between a security setting and the server’s operations. Although IT teams might think they are secure after enforcing these “weak” security policies, every object that wasn’t hardened is a live vulnerability in the infrastructure. Vulnerabilities and security flaws resulting from misconfiguration will most likely be found either by an auditor or, in a worse scenario, by an attacker.
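To make the gap concrete, here is a minimal, hypothetical sketch of how you might measure how much of a benchmark a “weak” policy actually covers. The policy object names below are illustrative placeholders, not entries from a real CIS benchmark.

```python
# Hypothetical sketch: measure how much of a hardening benchmark a policy covers.
# Object names below are illustrative, not taken from an actual CIS benchmark.

def coverage_gap(recommended, enforced):
    """Return (coverage_ratio, un-enforced_objects) for a hardening policy."""
    recommended = set(recommended)
    enforced = set(enforced) & recommended
    missing = sorted(recommended - enforced)
    return len(enforced) / len(recommended), missing

# A benchmark may list hundreds of objects; a "weak" policy enforces a few dozen.
benchmark = [f"policy_object_{i}" for i in range(1, 301)]   # ~300 recommended
weak_policy = [f"policy_object_{i}" for i in range(1, 25)]  # only 24 enforced

ratio, unenforced = coverage_gap(benchmark, weak_policy)
print(f"coverage: {ratio:.0%}, un-hardened objects: {len(unenforced)}")
# Every un-hardened object is a live, auditable gap in the infrastructure.
```

Even a rough report like this makes the "weak policy" risk visible to management: the hundreds of un-enforced objects are exactly what an auditor or attacker will look at first.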
Today’s internal security threat landscape is rapidly changing. Overcoming the threats related to the basic assumption that the attacker has already penetrated our premises is extremely challenging.
Both the CIS/SANS 20 security controls and the NIST cybersecurity framework recommend that, once a new server or application is installed or updated, the most important security control is to configure it with a decent security policy and ensure continuous adherence to that policy. This means hardening the servers in real time.
Baseline hardening relies on four basic principles:
Collaboration between the IT operations team and the security team is essential for the success of a server hardening project. In most cases, the security team will be the one guiding the operations team about which policies to apply to different server roles and environments. The responsibility for the overall project and the actual hardening must be in the hands of the team who is managing the servers. Both teams should be actively involved in the project and communication between the teams is essential for a successful project.
* Review the security policies and make the adaptations and customizations that are relevant to your organization. Once reviewed, the security policies should be approved by senior management, and policy changes should go through a formal procedure. It is highly recommended to discuss all the different aspects of the policy and get the input of the IT team before starting to harden the servers.
* Decide which server environment should get priority in the project; resources are always limited. You should plan to harden all your servers for optimal security: start with the most critical production servers and then move to test and dev environments.
* Project documentation is essential. Make sure that you have the right policies for your servers, and keep a complete inventory of servers and installed applications in an Excel spreadsheet.
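The documentation step above can be sketched in a few lines. This is a minimal, hypothetical example of an inventory record; the hostnames, roles, and policy names are illustrative, and a real project would export the same structure to the spreadsheet the teams already use.

```python
import csv
import io

# Hypothetical sketch of the documentation step: a minimal server inventory
# with roles, installed applications, and the assigned hardening policy,
# written as CSV (directly openable in Excel). All values are illustrative.
servers = [
    {"hostname": "web-prod-01", "role": "web server", "env": "production",
     "apps": "IIS; .NET 4.8", "policy": "CIS Windows Server 2019 L1"},
    {"hostname": "db-prod-01", "role": "database", "env": "production",
     "apps": "SQL Server 2019", "policy": "CIS SQL Server 2019"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(servers[0]))
writer.writeheader()
writer.writerows(servers)
inventory_csv = buf.getvalue()
print(inventory_csv)
```

Keeping the inventory in a structured format like this (rather than free text) also makes it trivial to diff the inventory between audits and spot servers that were added without being assigned a policy.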
Testing is an integral part of making changes in an IT environment, and when it comes to hardening, it is as critical as it gets. Failing to perform suitable testing will damage production servers and applications. In many cases, a lack of proper testing has caused IT teams to abandon the hardening project or to enforce a weak baseline that won’t satisfy compliance and audit requirements.
There are three testing scenarios to cover in a hardening project:
* The most important testing: test policies before deploying them to production. This is also the most challenging kind. Hardening means making changes to production at the OS level, and such changes can damage applications and cause server malfunctions. To avoid damage, the IT operations team should create a test environment that simulates production. Only when the changes have been tested in a suitable environment (taking into account server roles, applications, etc.) can they be enforced on production servers. This testing phase can take a very long time and requires substantial effort and resources, and it is an ongoing procedure because the environment is dynamic: new applications, operating systems, and policies are installed and updated frequently.
* Test server functionality after hardening: make sure that once the hardening is applied, everything works and there are no operational problems.
* Post-hardening, test servers locally to make sure they received the security policies and are now hardened according to the organizational policy.
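The post-hardening check described above can be sketched as a simple comparison between the settings actually applied on a server and the approved policy. This is a hypothetical illustration: the setting names and values are invented placeholders, and a real implementation would read them from the OS rather than a hard-coded dictionary.

```python
# Hypothetical sketch of the post-hardening check: compare the settings actually
# applied on a server with the approved organizational policy and report drift.
# Setting names and values are illustrative, not from a real benchmark.

approved_policy = {
    "PasswordComplexity": "Enabled",
    "SMBv1": "Disabled",
    "RemoteRegistry": "Disabled",
    "AuditLogonEvents": "Success,Failure",
}

def verify_hardening(applied, policy):
    """Return {setting: (expected, actual)} for every deviation from policy."""
    return {
        name: (expected, applied.get(name, "<missing>"))
        for name, expected in policy.items()
        if applied.get(name) != expected
    }

# Values as read back from the server after hardening (simulated here).
applied_settings = {
    "PasswordComplexity": "Enabled",
    "SMBv1": "Enabled",            # drift: someone re-enabled SMBv1
    "AuditLogonEvents": "Success,Failure",
}

deviations = verify_hardening(applied_settings, approved_policy)
for setting, (expected, actual) in deviations.items():
    print(f"NON-COMPLIANT: {setting}: expected {expected!r}, found {actual!r}")
```

Running a check like this on a schedule (rather than once) is what turns the one-time hardening project into continuous compliance, and it feeds directly into the monthly or quarterly audits described below.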
Setting up an audit team in your IT organization (if you don’t have one) is highly recommended. This can be a system administrator or a security analyst who audits the servers’ policy every month or quarter. If there are deviations from the policy, make sure they are reported and remediated as soon as possible.
Over the course of the last 14 years, our team at CalCom has helped manage hundreds of server hardening projects. Whether it is a small but critical server environment or a large Fortune 500 enterprise environment, we have seen the same planning mistakes made by management and technical teams over and over again.
There isn’t a single product or solution that can stop ransomware; a layered security approach is needed. The first thing to do is to ensure your information is backed up offline and your defenses are up. You want to be as prepared as possible. Implementing the recommendations below will keep your defenses up and significantly reduce the risk of a successful ransomware exploit.
The following list offers basic tips that you can use as the basis for your server hardening project plan:
1. Protect your file server against ransomware. Block ransomware’s changes to your file servers: harden NTFS permissions and settings, monitor and block unauthorized file usage, and check file integrity, creation, and deletion. Most file servers and SANs are sabotaged when an infected laptop or workstation on the network has a remote drive mapped. Locking down and monitoring the file server won’t save the workstation, but it will prevent the shared resource on the file server from being corrupted and raise an alarm.
2. Prevent saving files locally; make sure your files are stored on network storage and backed up offline.
3. Control write-access permissions to remote files. Use Access Control Lists to specify what actions your users can perform on files. If the only permission a user account has is Read-Only, it is not possible for ransomware running as that user to corrupt anything.
4. Enforce best practices for basic NTFS permissions on a share. It is recommended to implement a tool or process that standardizes the way shares and file/folder permissions are created in the organization. Once the best practices are enforced, it is essential to actively prevent permissions degradation. Often, administrators start with a well-designed permissions structure which, over time, is modified; this opens the potential for users to change the permissions structure and open up security holes.
5. Configure the environment not to run unsigned Macros. Enforce and harden macro application executions, untrusted PowerShell executions, and untrusted WSH codes.
6. Enforce best practice OS baselines to reduce the attack surface. User rights, remote services, deactivate autoplay, use of strong passwords, disabling vssaexe, registry keys, etc.
7. Harden and enforce local firewall configurations, settings, and port usage. For example, block TOR IP addresses known to be malicious.
8. Implement a whitelist approach that allows only specified programs to run on the organization’s computers and therefore blocks malware (for example TOR, Flash, and Zip blocking). Implementing a whitelist approach at the machine level means that you have full control of the software, processes, and actions that run on your servers. Implement rules that block activity such as files executing from the ‘AppData’ directory, or even disable the ability for executables to run from attachments.
9. Restrict administrative rights and access. Managing access control from the user perspective is very hard to implement; it is recommended to implement access control at the machine level for critical endpoints. This approach controls the privileges of the user’s access to and between network resources.
10. Harden and enforce browser policies, using browser policy hardening best practices.
11. Enforce spam filtering and malicious-attachment settings.
12. Antivirus: harden and ensure antivirus is installed and up to date across all endpoints within the business. While this will not protect against zero-day exploits, many ransomware strains are not as developed and use older techniques for which there are security software defenses.
13. Ensure data backup and secure the backups. Store files on network or cloud storage and ensure that storage is backed up offline.
14. Keep those backups where they cannot be hit! An air gap between the data and the backup copy means that no ransomware, worm, hacker or other hazards can get to it.
15. Patching: verify and enforce the latest security patches for the OS, firewall, antivirus, and applications.
16. Don’t give every end user administrative rights. The principle of least privilege has been recommended forever; it is hard to implement fully, but you should at least do the basics.
Most Powerful of All: What Your People Can Do
17. Read your logs. Don’t ignore them; your logs provide the best intelligence about what’s going on in your environment.
18. Test your disaster recovery. Although fire drills are never popular, you should check that your DR really works and isn’t just a policy on paper. When the day comes, your users will be thankful.
19. Test Your Users. How do they react to suspicious emails and files?
20. Educate Your Users!
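Several of the tips above come down to enforceable rules rather than advice. As a concrete illustration of tip 8’s whitelist idea, here is a minimal, hypothetical sketch of an allow/block decision for executable paths. The directories and rules are placeholders; real enforcement on Windows would use a mechanism such as AppLocker or Software Restriction Policies, not ad-hoc scripting.

```python
import os.path

# Hypothetical sketch of tip 8's whitelist idea: allow execution only from
# approved directories and explicitly block anything launched from 'AppData'.
# Paths and rules are illustrative; real enforcement would use AppLocker/SRP.

ALLOWED_DIRS = (r"C:\Program Files", r"C:\Windows\System32")
BLOCKED_DIRS = (r"C:\Users\alice\AppData",)

def execution_allowed(exe_path):
    """Deny-first check: blocked prefixes win, then require an allowed prefix."""
    path = os.path.normpath(exe_path).lower()
    if any(path.startswith(d.lower()) for d in BLOCKED_DIRS):
        return False                      # explicit block, e.g. the AppData rule
    return any(path.startswith(d.lower()) for d in ALLOWED_DIRS)

print(execution_allowed(r"C:\Program Files\App\app.exe"))         # True
print(execution_allowed(r"C:\Users\alice\AppData\Roaming\x.exe"))  # False
print(execution_allowed(r"D:\downloads\tool.exe"))                 # False
```

Note the deny-first ordering: the blocked list is consulted before the allowed list, so a blocked directory nested under an allowed one still loses. That is the same default-deny posture the whitelist tip recommends.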
What is real-time server hardening and why is it critical for warding off targeted attacks?
Attackers look for systems with default settings that are immediately vulnerable. Once an attacker exploits a system, they start to look for a vulnerable configuration or to make changes to the current configuration. The goal is to gain access to data and/or obtain privileged users’ credentials. These two reasons are why server hardening is so important. By hardening servers in real time, you prevent these configuration changes and receive alerts about any attempt to make an unauthorized change; this way, attacks are stopped and compliance is maintained over time. Unlike detective and corrective methods, which identify a configuration change and notify about a potential attack after the fact, a real-time preventive approach is beneficial from both a security and an operations perspective.
By hardening all servers with standard benchmarks, organizations dramatically improve server security and reduce the overall attack surface.
Most systems running Windows, Linux, or similar provide basic hardening options, but to really protect the servers, a deep, managed hardening process is required.
How can you harden your servers in real-time?
CalCom CHS is a server hardening automation platform designed to help IT operations teams harden servers in a cost-effective fashion. CHS’s learning capabilities perform a “what if” analysis of the baseline’s impact directly on the production environment, so IT teams don’t need to go through a policy-testing procedure before hardening the servers. With CHS you can:
- Deploy the required security baseline without affecting the production services.
- Reduce the costs and resources required for implementing and achieving compliance.
- Manage the hardening baseline for the entire infrastructure from a single point.
- Avoid configuration drifts and repeated hardening processes.