AWS Logging Services for Operational Integrity

AWS provides extensive logging and monitoring capabilities to help ensure the integrity and security of workloads. Services such as CloudWatch, CloudTrail, VPC Flow Logs and AWS Config provide deep insights into the operational aspects of a system. These services generate an avalanche of log data that most organizations tend to ignore, and that can be a costly mistake: it may lead to service interruptions, missed opportunities for detecting security breaches, or inefficient resource utilization. For logging to be effective, it is essential to sort through the logs and spot outliers automatically, saving your IT team time and better protecting your workloads.
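As an illustrative sketch of what "spotting outliers automatically" can mean in practice, a simple statistical test over a log-derived metric is often enough to surface anomalies. The data, metric and threshold below are assumptions for illustration, not tied to any specific AWS API:

```python
from statistics import mean, stdev

def flag_outliers(counts, threshold=2.0):
    """Return the indices of counts that deviate more than `threshold`
    standard deviations from the mean. A threshold of 2.0 is used here
    because large spikes inflate the stdev of small samples."""
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Hourly counts of rejected connections, as might be derived from
# VPC Flow Logs (illustrative numbers)
hourly_rejects = [12, 15, 11, 14, 13, 12, 210, 14]
print(flag_outliers(hourly_rejects))  # → [6]
```

A real pipeline would feed this kind of check from CloudWatch metrics or parsed flow-log records and alert on the flagged intervals rather than printing them.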

Tools such as Splunk or Elasticsearch can process and analyze logs at scale and offer rich dashboards for detecting and acting on patterns. The process of viewing and analyzing logs got even easier with AWS’s recent launch of Elasticsearch as a service. Read more about logging services on the AWS cloud platform on the stackArmor Blog.

Intrusion Detection, Intrusion Prevention and Web Application Firewalls

As more and more businesses move online, increasingly onto cloud platforms such as AWS, it is critical to ensure robust cybersecurity defenses are in place. Typically, the security architecture for most web-facing applications begins with boundary protection using a firewall. A number of security sub-systems, such as Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS) and Web Application Firewalls (WAF), are generally considered basic requirements. This is especially true for online businesses offering services in the Healthcare, Financial, Government and Commercial payments markets.

As always, there is a wide variety of choices, and it is critical to understand the role each of these security systems plays in order to make an informed implementation decision. Let us begin by reviewing some basic definitions of what these systems do and the protections they provide… Read more about Intrusion Prevention Systems, Intrusion Detection Systems and Web Application Firewalls at the stackArmor Blog.
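To make the WAF concept concrete, the sketch below shows the core idea behind signature-based request inspection: matching incoming request data against known attack patterns. The two signatures are simplified assumptions for illustration; production WAF rulesets (such as the OWASP Core Rule Set) are far broader and more carefully tuned:

```python
import re

# Simplified, illustrative WAF-style signatures (assumptions, not a real ruleset)
SIGNATURES = {
    "sql_injection": re.compile(r"('|%27)\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./"),
}

def inspect_request(query_string):
    """Return the names of any attack signatures the query string matches."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(query_string)]

print(inspect_request("id=1' OR 1=1"))           # → ['sql_injection']
print(inspect_request("file=../../etc/passwd"))  # → ['path_traversal']
print(inspect_request("id=42"))                  # → []
```

An IDS performs similar pattern matching but only alerts, while an IPS or WAF sits inline and can block the matching request before it reaches the application.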

Cybersecurity is critical for SaaS CEOs and Investors

The Identity Theft Resource Center reports that over 175 million records were exposed in data breaches in 2015 through October. Breaches continue to increase, and as more businesses offer SaaS solutions to their customers, SaaS CEOs and investors must take a careful look at their security practices to avoid business and financial risk.

More and more businesses are offering SaaS solutions to their customers. However, many SaaS firms and their investors underestimate the importance of security as a critical component of their collective success. Many commercial SaaS operators have a “chink in their armor”: a lack of cybersecurity talent, low investment in security tools and generally weak management focus on security issues are all contributing factors. This can be a costly mistake.

Changing Regulatory Landscape around Cybersecurity

U.S. regulators such as the SEC and the FTC, among others, are starting to aggressively enforce, and in some cases expand, their jurisdiction over cybersecurity-related cases. As an example, in late August the U.S. Court of Appeals for the Third Circuit in Philadelphia ruled against the Wyndham Worldwide hotel chain, holding that the FTC has jurisdiction over enterprises and organizations that employ poor IT security practices.

“…Third Circuit Court of Appeals decision reaffirms the FTC’s authority to hold companies accountable for failing to safeguard consumer data. It is not only appropriate, but critical, that the FTC has the ability to take action on behalf of consumers when companies fail to take reasonable steps to secure sensitive consumer information,” said Federal Trade Commission Chairwoman Edith Ramirez. The FTC’s complaint stated that Wyndham had not implemented security measures such as complex user IDs and passwords, firewalls and network segmentation between the hotels and the corporate network. Most of these measures are considered standard operating procedure within most large organizations… Read more at the stackArmor Blog.

ServiceOps bridging the gap left by DevOps and Application Performance Monitoring (APM)

Application Performance Monitoring (APM) and DevOps are drawing a lot of interest in an IT community looking for ways to deliver faster and more reliable information services. DevOps largely bridges the gap between development and operational deployment through automation, enabling the rapid delivery of software through Continuous Integration and Deployment along with some limited, largely system-level monitoring. APM is another rapidly emerging technology that helps deliver a more reliable service through application-level metrics and monitoring, accelerating incident resolution, performance-issue detection and diagnostics; APM addresses issues that traditional monitoring and logging do not adequately cover. Adoption of APM solutions has been fast and furious, driven in large part by easy-to-consume, cloud-enabled SaaS services such as AppDynamics, New Relic, Loggly, LogEntries, SumoLogic and Boundary, as well as open source alternatives like Elasticsearch, Logstash and Kibana (ELK), amongst others.
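The "application-level metrics" that distinguish APM from system-level monitoring can be illustrated with a minimal sketch: instrumenting a request handler to record per-call latency. Real APM agents instrument code automatically and ship the samples to a backend; the decorator and function names below are assumptions for illustration:

```python
import time
from functools import wraps

# In-process store of latency samples, keyed by function name
# (a real APM agent would ship these to a collector instead)
LATENCIES = {}

def traced(func):
    """Record wall-clock latency for each call of the wrapped function."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            LATENCIES.setdefault(func.__name__, []).append(
                time.perf_counter() - start)
    return wrapper

@traced
def handle_request(n):
    return sum(range(n))  # stand-in for real request-handling work

handle_request(1000)
print(len(LATENCIES["handle_request"]))  # → 1 (one latency sample recorded)
```

Aggregating such samples per endpoint is what lets APM tools surface slow transactions that host-level CPU or memory graphs would never reveal.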

However, both DevOps and APM fall short in providing a truly holistic and light-weight operations solution.

ServiceOps addresses gaps left by DevOps and APM


Hard-core systems operations and management tasks that typically include patching, vulnerability management, backup/restore and continuous security monitoring, as well as the financial management of the platform, are largely left out. This is a serious gap in the current “state of the art” given that typically 60-70% of a system’s total cost is associated with Operations & Maintenance activity. Although ITIL is a robust framework that got some traction in the operations management arena in the past decade, it is arguably heavyweight, considered costly, and lacks agility. Just as DevOps emerged as a logical implementation-level methodology to deliver agile application services through automation, ServiceOps provides an integrated, data-driven framework for platform operations that integrates with DevOps.

The key technology drivers for ServiceOps are Cloud Computing and Big Data: as infrastructure becomes more software-driven and telemetry data becomes easily available across the whole “stack”, we now have the ability to collect, process and act on large amounts of data. The foundational elements of ServiceOps are in place.



ServiceOps is an implementation and delivery focused methodology that uses full-stack telemetry data and automation to help organizations deliver a reliable, secure and cost-effective IT service that is continuously optimized and includes End-User, System, Security and Financial operations.

For example, in pay-as-you-go cloud computing models, the ability to optimize the performance of cloud-based applications pays rich dividends in operational savings. Some organizations report saving up to 20% of their IaaS spend through rigorous monthly tracking and optimization: eliminating “orphaned storage”, right-sizing VMs and using the right pricing model. Most IT organizations are ill-equipped for, and not focused on, the financial aspects of cloud computing. Similarly, there are serious emerging challenges in the security operations arena. Traditional security frameworks tend to be reactive in nature; forensic and trending analytics have been the primary use cases. But with the advent of the NIST Cybersecurity Framework, high-profile incidents like the Target breach and the increased cyber threat, organizations must implement real-time, automated solutions to contain security costs while still delivering a viable “armor” against threats.
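The “orphaned storage” check above is easy to automate. The sketch below flags unattached block-storage volumes in an inventory export and totals their monthly cost; the record fields and price are assumptions for illustration, not a specific cloud API (with boto3, a real version would filter EC2 volumes whose status is "available"):

```python
# Assumed flat-rate price for illustration only
PRICE_PER_GB_MONTH = 0.10

def orphaned_volume_cost(volumes):
    """Return the volumes with no attachment and their total monthly cost."""
    orphans = [v for v in volumes if not v.get("attached_instance")]
    return orphans, sum(v["size_gb"] * PRICE_PER_GB_MONTH for v in orphans)

# Illustrative inventory export (hypothetical IDs and sizes)
inventory = [
    {"id": "vol-1", "size_gb": 100, "attached_instance": "i-abc"},
    {"id": "vol-2", "size_gb": 500, "attached_instance": None},
    {"id": "vol-3", "size_gb": 250, "attached_instance": None},
]
orphans, monthly_cost = orphaned_volume_cost(inventory)
print([v["id"] for v in orphans], monthly_cost)  # → ['vol-2', 'vol-3'] 75.0
```

Running a report like this monthly, alongside right-sizing and pricing-model reviews, is the kind of routine financial operation ServiceOps folds into platform management.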


Logging and Monitoring with Lucidworks Silk – Lucene-Solr, Kibana and Logstash

The folks at Lucidworks recently announced Silk – a logging and monitoring solution for enterprises using open source components like Solr, Kibana and Logstash.


You can learn more about their product on the Lucidworks website. Clearly, the logging and monitoring space has spawned many interesting open source options; Elasticsearch, for example, is also a Lucene-based solution that integrates Kibana and Logstash. Based on discussions with early adopters, Lucidworks appears to be making an enterprise play, offering a more complete, enterprise-ready solution backed by a “Big Data” offering as well.