Friday, 20 October 2017

New Study Reveals Brands Fail to Use Customer Data to Deliver Personalised Digital Experiences

Sitecore, the global leader in experience management software, today released results of a global study[1] conducted in partnership with Vanson Bourne to understand how brands are managing the data they collect from consumers, securing and analysing it, and using it to deliver a more personalised customer experience.

The research, which included 50 marketing and IT decision makers and 500 consumers in the UK, found that while brands face pressure to be data-driven, and while 66% of UK respondents place a high priority on personalisation, they struggle to manage and mine customer data both to inform customer experience strategies and to deliver on the promise of personalisation.

An overwhelming 98% of UK consumer respondents believe that there is such a thing as ‘bad personalisation’, with UK consumers particularly frustrated by poor personal touches. They cite as examples: brands using out-of-date information about them (66% in the UK compared to 59% globally), brands that get personal customer details wrong (63% in the UK compared to 57% globally), and brands making assumptions about what consumers want based on single interactions (64% in the UK compared to 54% globally).

Overwhelming data

For brands, poorly personalised experiences are often the result of an overwhelming amount of data and the complexities that arise around managing it. On average, brands say they’re collecting seven different types of data about online customers, ranging from transactional details to behavioural insights and trends. Yet almost a fifth (18%) of UK brand respondents point to a lack of skills needed to properly use or analyse the data collected, and 42% don’t have the capabilities to integrate data collection. Only 18% have the ability to collect online data on an individual (vs. consumer segment) level.

“Customers are openly providing insight for brands to understand their wants and needs, but brands are struggling to follow through on their end of the deal,” said Scott Anderson, CMO of Sitecore. “The level of expectation that today’s consumer has, coupled with the level of dissatisfaction brand marketers have with the tools and resources available to them, suggests brands must take urgent action to improve their ability to collect, connect, analyse, and act on customer data.”

With pressure from all sides to use data more effectively, many organisations don’t have the tools and knowledge they need to move forward and meet the expectations of their stakeholders and, more importantly, their customers. Without addressing these internal obstacles, brands are missing out on the actionable insights that could enhance the customer experience and, overall, increase loyalty and sales.

Additional research highlights include:

  • Customers think brands know more about them than they do: 63% of customer respondents believed brands knew their purchase history, while only 40% of brand respondents said they were collecting it.
  • Many brands struggle with existing analytics solutions: Only 18% have the ability to collect online data at an individual level, and though 58% of brands report using digital analytics software, nearly two thirds (62%) say they’re not completely satisfied with their current solution.
  • Brands crave more insight about their customers: When asked what they most want in a customer intelligence solution, just over half indicate both the ability to view customers on an individual level and real-time insights into customer behaviour (both 54%), and 48% want automated responses based on customer actions.

Download the complete survey findings here. To keep up to date with news from Sitecore Symposium 2017, happening October 16-19 in Las Vegas, visit here or follow the hashtag #SitecoreSYM.

About the research 

The Contextual Intelligence research, commissioned by Sitecore and conducted by Vanson Bourne from February 2017 to April 2017, consisted of interviews with 680 marketing and IT decision makers and 6,800 customers across 14 countries: the UK, France, Germany, Netherlands, Denmark, Sweden, UAE, the US, Canada, China, India, Japan, Singapore and Australia. Vanson Bourne is an independent specialist in market research for the technology sector. Their reputation for robust and credible research-based analysis is founded upon rigorous research principles and their ability to seek the opinions of senior decision makers across technical and business functions, in all business sectors and all major markets. For more information, visit www.vansonbourne.com.

About Sitecore

Sitecore is the global leader in experience management software that enables context marketing. The Sitecore® Experience Platform™ manages content, supplies contextual intelligence, automates communications, and enables personalised commerce, at scale. It empowers marketers to deliver content in context of how customers have engaged with their brand, across every channel, in real time—before, during, and after a sale. More than 4,900 brands––including American Express, Carnival Cruise Lines, easyJet, and L’OrĂ©al–– have trusted Sitecore for context marketing to deliver the personalised interactions that delight audiences, build loyalty, and drive revenue.

 


Marrying machine and human threat intelligence for ultimate security

Despite the gloomy cyber attack headlines, many organisations are moving along the cyber security maturity curve, and the adoption of intelligence-led security strategies has increased. One of the main drivers is the sheer volume of data that comes in and out of a business, which makes it difficult to derive actionable insight. A lot of data that is not conveyed in the right way can be just as bad as not enough, and this is the situation many companies find themselves in, resulting in threat overload. It comes as no surprise, then, that one in three (32%) security professionals indicate they lack effective intelligence to detect and act on cyber threats, according to a recent survey[1].

Unfortunately, many security teams are not optimised to deliver on this volume of threat intelligence and are often overworked, spending far too long on the (very necessary) simple, basic tasks but never stepping back to look at what’s going on at a macro level. Many strategies are still in their infancy, more reactive than deliberate. But threat intelligence can no longer be seen as adding to the big data problem, or as merely providing tactical indicators.

Security teams must get the people-process-technology triangle right. When considering which tools to invest in, they should look for technology that can assimilate both human-readable and machine-readable information into one easy-to-consume resource, and that can analyse threat data from multiple sources in real time, enabling analysts to quickly and easily assess whether to take defensive action. This can reduce the window of vulnerability to a matter of hours, or even minutes.
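
As a rough illustration of that kind of assimilation, the minimal Python sketch below folds machine-readable feed entries and human-readable analyst notes into one record per indicator. The feed and report structures here are made up for the example and do not represent any vendor's actual data model or API.

    # A minimal sketch: merge machine-readable feed items and human-readable
    # analyst notes into a single easy-to-consume record per indicator.
    from dataclasses import dataclass, field

    @dataclass
    class IntelRecord:
        indicator: str                                      # e.g. an IP address or domain
        sources: list = field(default_factory=list)         # machine feeds that reported it
        confidence: int = 0                                  # highest machine-assigned confidence seen
        analyst_notes: list = field(default_factory=list)   # human-readable context

    def merge(feed_items, analyst_reports):
        """Fold automated feed items and human reports into one view per indicator."""
        records = {}
        for item in feed_items:                              # machine-readable side
            rec = records.setdefault(item["indicator"], IntelRecord(item["indicator"]))
            rec.sources.append(item["source"])
            rec.confidence = max(rec.confidence, item.get("confidence", 0))
        for report in analyst_reports:                       # human-readable side
            rec = records.setdefault(report["indicator"], IntelRecord(report["indicator"]))
            rec.analyst_notes.append(report["note"])
        return records

    feeds = [{"indicator": "203.0.113.7", "source": "feed-a", "confidence": 80}]
    notes = [{"indicator": "203.0.113.7", "note": "Linked to a campaign against UK retail."}]
    print(merge(feeds, notes)["203.0.113.7"])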

Businesses can therefore identify the threats they must take notice of, giving them actionable and relevant insight. Automating these processes to surface valuable outcomes and insights is essential. Such solutions begin by cataloguing information about the identities, motivations, characteristics, and methods of attackers. This knowledge is then put in context against real-time activity to identify invasive behaviour with evidence-based knowledge. Customisation is also possible, tailoring tools to suit any network, because threat alerts should be informative, not just alarming: for example, telling you whether your data is the object of someone’s desire or whether your network was simply unlucky.
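
The sketch below shows that idea in miniature, assuming a hypothetical adversary catalogue and event format rather than a real product's schema: observed activity is checked against catalogued attacker characteristics, and an alert is raised only above a customisable threshold, enriched with the actor's identity and motivation.

    # Hypothetical catalogue of attacker identities, motivations and methods,
    # keyed by an observable (here a source IP); illustrative only.
    ADVERSARY_CATALOGUE = {
        "203.0.113.7": {"actor": "ExampleGroup", "motivation": "financial",
                        "methods": ["credential stuffing"], "score": 80},
    }

    def assess(event, threshold=50):
        """Raise an enriched alert only when the event matches a catalogued
        adversary above a customisable threshold (targeted vs. simply unlucky)."""
        match = ADVERSARY_CATALOGUE.get(event["src_ip"])
        if match and match["score"] >= threshold:
            return {"actor": match["actor"],
                    "motivation": match["motivation"],
                    "methods": match["methods"],
                    "event": event}
        return None                                          # below threshold: no alarm

    print(assess({"src_ip": "203.0.113.7", "dst": "payments-api", "action": "login"}))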

All of this automation is imperative, but the reality is that cyber actors are people too. Human intuition and human intelligence collection (HUMINT) are crucial, as they contextualise threat data into useful and actionable outcomes. Such useful context includes geo-political circumstances, economic struggles, or publicly disclosed attacks that have impacted another industry or organisation. This results in broader visibility and enriches existing intelligence collection mechanisms.

It is therefore important to have a robust security team, but also, when choosing a technology partner, to know the individuals behind the tools. It’s essential that they can help with both the technology and people sides of intelligence: curating data in a way that is useful to each individual company, contextualising adversaries to a specific environment, and filling any skills gaps. During an incident they should also be able to add extra layers of capability, such as multi-lingual expert security analysts.

In the case of threat intelligence providers, labs teams continuously monitor malicious activity on a global scale, while deep and dark web specialists garner in-depth insights from the murky underworld of the cyber-criminal. This gives you access to more privileged conversations, tools, techniques and exchanges, adding another human aspect to the intelligence.

A DIY deep and dark web service of your own just isn’t possible. Threat actors come from myriad locations across the globe, the linguistic and cultural barriers are huge, and penetrating the relevant communities requires extensive trust. Many communities are invite-only, so appropriate anonymisation practices are required, and, as you can imagine, threat actors are constantly on the lookout for “moles”. Building trust and respect takes time, so a third party that can do this is an essential part of a robust security posture, helping you navigate the murkier side of the web. In turn, you’ll gain further contextual information that helps you understand the bigger picture of a threat.

As the threats posed by cyber criminals continue to grow, you must cut through the noise of data to find the threat intelligence that is relevant and actionable for your organisation: distilling the influx of information, analysing vast volumes of data in real time, and applying both machine and human intelligence to help prioritise malicious activity. You need a deliberate strategy that makes you the commander of cyber threats, no longer just mowing the lawns and trying to keep the bad guys out.
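
As a final hedged sketch, that prioritisation can be as simple as combining a machine-assigned score with a boost for human-supplied context so analysts work the most relevant threats first; the field names below are illustrative, not a real product's schema.

    def prioritise(alerts, top_n=5):
        """Order alerts by machine score plus a flat boost for human context."""
        def priority(alert):
            boost = 25 if alert.get("analyst_context") else 0   # human enrichment
            return alert.get("machine_score", 0) + boost
        return sorted(alerts, key=priority, reverse=True)[:top_n]

    queue = [
        {"id": 1, "machine_score": 60},
        {"id": 2, "machine_score": 55, "analyst_context": "actor targets finance"},
        {"id": 3, "machine_score": 90},
    ]
    print([a["id"] for a in prioritise(queue, top_n=2)])      # -> [3, 2]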

[1] Survey of 153 attendees, representing a range of industries, conducted by Anomali at InfoSecurity Europe, June 2017

 

Richard Betts, Head of International Financial Services at Anomali


Lastline Again Receives Highest Achievable Security Effectiveness Score in 2017 NSS Labs Breach Detection Systems Group Test

Lastline Inc., the leader in advanced network-based malware protection, today announced that for the second year in a row it achieved 100 percent security effectiveness in the 2017 NSS Labs Breach Detection Systems Group Test. The combination of extraordinary Security Effectiveness and a low total cost of ownership earned Lastline an “NSS Recommended” rating for the third year in a row. Prior to Lastline achieving 100 percent detection in last year’s Breach Detection test, no other product had achieved this result in any NSS test.

NSS Labs conducts independent, real-world testing of the malware-based threats faced by organizations, including drive-by exploits, social media exploits, and threats targeting web and email traffic. Each year NSS Labs increases the sophistication of the threats and the level of evasion techniques employed. This year’s test included seven products from six vendors and had a significantly lower average Security Effectiveness score compared to last year’s test. Despite the elevated sophistication of this year’s test, Lastline detected every single piece of malware.

“We are very pleased with the test results,” said Chris Kruegel, Lastline co-founder and CEO. “NSS Labs is the recognized leader in independent security product testing and can create a testing environment that mimics the sophisticated threats that criminals use to breach networks. Deep Content Inspection™ is the detection technology that excelled in this year and last year’s test, and is at the core of all Lastline products. Combining years of experience and our uniquely talented engineering team have again demonstrated our ability to detect all behaviors engineered into any piece of malware, and our ability to distinguish between malicious and benign activity. Excellent detection combined with an extremely low false positive rate means that scarce incident response teams are not wasting time following up on false alarms, but instead are using our high-fidelity alerts to respond to real threats.”

Security Effectiveness is only part of the equation that results in improved enterprise security.  In addition, Lastline has embraced a cloud-based architecture that results in a lower Total Cost of Ownership (TCO). The low TCO calculated by NSS shows that organizations can afford to deploy unmatched breach detection across their entire network, instead of only at certain locations. With Lastline, IT organizations conserve their scarce security resources while improving breach detection and response.

“Breach Detection Systems are a must-have technology for any organization needing to defend against malware infections and data loss. Threats are becoming increasingly sophisticated, and as a result, this year’s Breach Detection Systems test was significantly more difficult than prior years’ tests,” said Vikram Phatak, CEO at NSS Labs. “Lastline Enterprise achieved a 100 percent Security Effectiveness rating, having detected even the most sophisticated attacks and evasion techniques, and should be on the short list for anyone looking to purchase a breach detection solution.”

To view the 2017 NSS Labs Breach Detection Systems group test results, visit www.lastline.com/nss2017


Personal data of millions of Malaysian citizens allegedly for Sale online

Personal data of millions of Malaysian citizens is apparently up for sale online. It could potentially be the biggest breach in the country’s history.

View Full Story 

ORIGINAL SOURCE: IBTimes


New form of Malware discovered by Security Researchers

A new form of ransomware has been discovered and is being distributed in the same way as one of the most successful types of file-locking malware.

View Full Story 

ORIGINAL SOURCE: ZDNET


Fancy Bear Hackers rush to Exploit Flash bug

Russian hacking group Fancy Bear is rushing to exploit the recently disclosed Adobe Flash bug before patches are widely applied.

View Full Story 

ORIGINAL SOURCE: IBTimes


US behind China in Bug Reporting

The US is beginning to fall behind China in terms of the speed at which organisations are alerted to security vulnerabilities, according to Recorded Future.

View Full Story 

ORIGINAL SOURCE: The Register
