Facebook can track your browsing even after you’ve logged out, judge says

A judge has dismissed a lawsuit accusing Facebook of tracking users’ web browsing activity even after they logged out of the social networking site. The plaintiffs alleged that Facebook used the “like” buttons found on other websites to track which sites they visited, meaning that the Menlo Park, California-headquartered company could build up detailed records of their browsing history. The plaintiffs argued that this violated federal and state privacy and wiretapping laws. US district judge Edward Davila in San Jose, California, dismissed the case, saying the plaintiffs failed to show that they had a reasonable expectation of privacy or suffered any realistic economic harm or loss.

— source theguardian.com

Hacking governments since 2011

Malware that WikiLeaks says belongs to the Central Intelligence Agency has been definitively tied to an advanced hacking operation that has been penetrating governments and private industries around the world for years, researchers from security firm Symantec say.

Longhorn, as Symantec dubs the group, has infected governments and companies in the financial, telecommunications, energy, and aerospace industries since at least 2011 and possibly as early as 2007. The group has compromised 40 targets in at least 16 countries across the Middle East, Europe, Asia, Africa, and on one occasion, in the US, although that was probably a mistake.

Uncanny resemblance

Malware used by Longhorn bears an uncanny resemblance to tools and methods described in the Vault7 documents. Near-identical matches are found in cryptographic protocols, source-code compiler changes, and techniques for concealing malicious traffic flowing out of infected networks. Symantec, which has been tracking Longhorn since 2014, didn’t positively link the group to the CIA, but it has concluded that the malware Longhorn used over a span of years is included in the Vault7 cache of secret hacking manuals that WikiLeaks says belonged to the CIA. Virtually no one is disputing WikiLeaks’ contention that the documents belong to the US agency.

“Longhorn has used advanced malware tools and zero-day vulnerabilities to infiltrate a string of targets worldwide,” Symantec researchers wrote in a blog post published Monday. “Taken in combination, the tools, techniques, and procedures employed by Longhorn are distinctive and unique to this group, leaving little doubt about its link to Vault7.”

Exhibit A in Symantec’s case is a set of Vault7 documents describing malware called Fluxwire. The changelog tracking differences from one version to the next matches, within one to a few days, the changes Symantec found in a Longhorn trojan known as Corentry. Early versions of Corentry also show that its developers used the same program database file location specified in the Fluxwire documentation. A change in Fluxwire version 3.5.0 that removes the database file path also matches changes Symantec tracked in Corentry. Up until 2014, Corentry source code was compiled using the GNU Compiler Collection. Then on February 25, 2015, it switched to the Microsoft Visual C++ compiler. The progression matches changes described in the Vault7 documentation.
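To see what that kind of forensic matching looks like in practice, here is a minimal sketch of pulling the embedded program database (PDB) path out of a Windows executable, the artifact Symantec matched between Corentry samples and the Fluxwire documentation. It assumes the open-source pefile library; the sample file name is hypothetical, and this illustrates the general technique rather than Symantec’s actual tooling.

```python
# Sketch: extract the PDB path from a PE binary's CodeView debug entry.
# Assumes the third-party "pefile" library; sample name is hypothetical.
import pefile

def pdb_path(sample: str) -> str | None:
    """Return the embedded PDB path left behind by the build machine, if any."""
    pe = pefile.PE(sample)
    for dbg in getattr(pe, "DIRECTORY_ENTRY_DEBUG", []):
        entry = dbg.entry
        # CV_INFO_PDB70 (CodeView) entries carry the original PDB file path.
        if entry is not None and hasattr(entry, "PdbFileName"):
            return entry.PdbFileName.rstrip(b"\x00").decode(errors="replace")
    return None

if __name__ == "__main__":
    print(pdb_path("sample.exe"))  # hypothetical sample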

Yet more similarities are found in a Vault7 malware module loader called Archangel and a specification for installing those modules known as Fire and Forget. The specification and modules described match almost perfectly with a Longhorn backdoor that Symantec calls Plexor.

Another Vault7 document prescribes the use of inner cryptography within communications already encrypted using the secure sockets layer protocol, performing key exchanges once per connection, and the use of the Advanced Encryption Standard with a 32-byte (256-bit) key. Still other Vault7 documents outline the use of the real-time transport protocol to conceal data sent to command-and-control servers and a variety of similar “tradecraft practices” to keep infections covert. While malware from other groups uses similar techniques, few use exactly the same ones described in the Vault7 documents.
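As an illustration of that layered pattern, and not of Longhorn’s actual protocol, here is a minimal sketch using the Python cryptography library: a fresh key exchange per connection derives a 32-byte AES key, which encrypts payloads that would already be travelling inside a TLS channel.

```python
# Sketch: "inner cryptography" inside an already-encrypted channel.
# One ephemeral key exchange per connection, then AES-256-GCM payloads.
# Assumes the third-party "cryptography" library.
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# One ephemeral key pair per side, per connection.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()

# Both sides derive the same 32-byte (256-bit) AES key from the shared secret.
shared = client_priv.exchange(server_priv.public_key())
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"inner-channel").derive(shared)

# Payloads get a second encryption layer on top of the outer SSL/TLS.
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"payload", None)
print(AESGCM(key).decrypt(nonce, ciphertext, None))
```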

According to Symantec:

While active since at least 2011, with some evidence of activity dating back as far as 2007, Longhorn first came to Symantec’s attention in 2014 with the use of a zero-day exploit (CVE-2014-4148) embedded in a Word document to infect a target with Plexor.

The malware had all the hallmarks of a sophisticated cyberespionage group. Aside from access to zero-day exploits, the group had preconfigured Plexor with a proxy address specific to the organization, indicating they had prior knowledge of the target environment.

To date, Symantec has found evidence of Longhorn activities against 40 targets spread across 16 different countries. Symantec has seen Longhorn use four different malware tools against its targets: Corentry, Plexor, Backdoor.Trojan.LH1, and Backdoor.Trojan.LH2.

Before deploying malware to a target, Longhorn will preconfigure it with what appears to be target-specific code words and distinct C&C domains and IP addresses to communicate with. Longhorn uses capitalized code words, internally referenced as “groupid” and “siteid”, which may be used to identify campaigns and victims. Over 40 of these identifiers have been observed, and typically follow the theme of movies, characters, food, or music. One example was a nod to the band The Police, with the code words REDLIGHT and ROXANNE used.

Longhorn’s malware has an extensive list of commands for remote control of the infected computer. Most of the malware can also be customized with additional plugins and modules, some of which have been observed by Symantec.

Longhorn’s malware appears to be specifically built for espionage-type operations, with detailed system fingerprinting, discovery, and exfiltration capabilities. The malware uses a high degree of operational security, communicating externally at only select times, with upload limits on exfiltrated data, and randomization of communication intervals—all attempts to stay under the radar during intrusions.

For C&C servers, Longhorn typically configures a specific domain and IP address combination per target. The domains appear to be registered by the attackers; however, they use privacy services to hide their real identity. The IP addresses are typically owned by legitimate companies offering virtual private server (VPS) or web hosting services. The malware communicates with C&C servers over HTTPS using a custom underlying cryptographic protocol to protect communications from identification.
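To make the behaviors Symantec describes concrete, the following is a purely hypothetical sketch, with invented names and values, of per-target configuration with capitalized code words, randomized beacon intervals, and a daily upload cap. It simulates the pattern in defensive, illustration-only form and reproduces no Longhorn code.

```python
# Sketch (invented values): per-target config and "low and slow" check-ins.
import random
from dataclasses import dataclass

@dataclass
class TargetConfig:
    groupid: str           # campaign code word, e.g. "REDLIGHT"
    siteid: str            # victim code word, e.g. "ROXANNE"
    c2_domain: str         # one domain/IP combination per target
    daily_upload_cap: int  # bytes of exfiltrated data allowed per day

def next_beacon_delay(base_seconds: int = 3600, jitter: float = 0.5) -> float:
    """Randomize the interval between check-ins to avoid a regular pattern."""
    return base_seconds * random.uniform(1 - jitter, 1 + jitter)

def allowed_upload(pending: int, sent_today: int, cfg: TargetConfig) -> int:
    """Send at most what the daily cap still allows."""
    return max(0, min(pending, cfg.daily_upload_cap - sent_today))

cfg = TargetConfig("REDLIGHT", "ROXANNE", "update.example-cdn.com", 512_000)
print(next_beacon_delay(), allowed_upload(900_000, 400_000, cfg))
```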

Prior to WikiLeaks publishing its Vault7 materials, Symantec had regarded Longhorn as a well-resourced organization that engaged in intelligence-gathering operations. Researchers based that assessment on Longhorn’s global range of targets and its ability to use well-developed malware and zero-day exploits. Symantec also noted that the group appeared to work a standard Monday-through-Friday work week, based on timestamps and domain name registration dates, behavior that is consistent with state-sponsored groups. Symantec also uncovered indicators, among them the code word “scoobysnack” and software compilation times, that showed Longhorn members spoke English and likely lived in North America.

Since WikiLeaks published its first Vault7 installment in early March, there has been no outside source to either confirm or refute the authenticity of the documents. The Symantec research establishes without a doubt that the malware described in the trove is real and has been used in the wild for at least six years. It also makes a compelling case that the group that’s responsible is the CIA.

— source arstechnica.com by Dan Goodin

When Apps Secretly Team Up to Steal Your Data

Imagine two employees at a large bank: an analyst who handles sensitive financial information and a courier who makes deliveries outside the company. As they go about their day, they look like they’re doing what they’re supposed to do. The analyst is analyzing; the courier is delivering. But they’re actually up to something nefarious. In the break room, the analyst quietly passes some of the secret financials to the courier, who whisks them away to a competing bank.

Now, imagine that the bank is your Android smartphone. The employees are apps, and the sensitive information is your precise GPS location.

Like the two employees, pairs of Android apps installed on the same smartphone have ways of colluding to extract information about the phone’s user, which can be difficult to detect. Security researchers don’t have much trouble figuring out if a single app is gathering sensitive data and secretly sending it off to a server somewhere. But when two apps team up, neither may show definitive signs of thievery alone. And because of the enormous number of possible app combinations, testing for app collusion is a herculean task.

A study released this week presents a new way to tackle this problem, and it found more than 20,000 app pairings that leak data. Four researchers at Virginia Tech created a system that delves into the architecture of Android apps to understand how they exchange information with other apps on the same phone. Their system, DIALDroid, then couples apps to simulate how they would interact and to determine whether they could work together to leak sensitive information.
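The core idea can be shown with a toy model, assuming an invented data model far simpler than DIALDroid’s bytecode-level analysis: pair every sender app with every receiver app and flag pairs where a sensitive source can flow across an inter-app channel to an external sink.

```python
# Toy model of pairwise collusion detection (not DIALDroid's implementation).
from dataclasses import dataclass, field
from itertools import permutations

@dataclass
class App:
    name: str
    reads_sources: set[str] = field(default_factory=set)   # e.g. {"GPS"}
    sends_channels: set[str] = field(default_factory=set)  # exported channels
    reads_channels: set[str] = field(default_factory=set)
    writes_sinks: set[str] = field(default_factory=set)    # e.g. {"internet"}

def leaky_pairs(apps: list[App]):
    """Yield (sender, receiver, channels) where data can cross app boundaries."""
    for sender, receiver in permutations(apps, 2):
        shared = sender.sends_channels & receiver.reads_channels
        if shared and sender.reads_sources and receiver.writes_sinks:
            yield sender.name, receiver.name, shared

apps = [
    App("prayer_times", {"GPS"}, {"com.example.LOCATION"}),
    App("wallpapers", reads_channels={"com.example.LOCATION"},
        writes_sinks={"internet"}),
]
for pair in leaky_pairs(apps):
    print(pair)  # ('prayer_times', 'wallpapers', {'com.example.LOCATION'})
```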

When the researchers set DIALDroid loose on the 100,206 most downloaded Android apps, they turned up nearly 23,500 app pairs that leak data. More than 16,700 of those pairs also involved privilege escalation, which means the second app received a type of sensitive information that it’s typically forbidden from accessing.

In one striking example, the study highlighted an app that provides prayer times for Muslims. It retrieves the user’s location and makes it available to other apps on the smartphone. More than 1,500 receiver apps, if installed on the same device, can get the location sent by the prayer-times app. Of those, 39 apps leak the location data to potentially dangerous destinations.

Relatively small groups of unsecured apps were behind the enormous number of leaky connections. The 16,700 app pairs that exhibited privilege escalation all involved one of 33 sender apps. And the roughly 6,700 app pairs that leaked data without privilege escalation all involved one of 21 sender apps. Twenty sender apps appeared in both categories. The problematic apps came in various forms, from entertainment and sports apps to photography and transportation apps.

Collusive leaks aren’t always intentional—and it’s very difficult to tell when they are. But no matter the aim, leaks of sensitive information without a user’s permission carry potential for abuse.

Sometimes, only one app in a pairing may seem intentionally malicious. An app can take advantage of a security flaw in another app to steal data and send it to a distant server, for example. Other times, both apps are poorly designed, creating an accidental data flow from one app to another, and then from the second app to a log file.

The study found that smartphone location was more likely to be leaked than any other type of information. It’s easier to imagine how a user’s real-time location could be abused than, say, a record of which networks that person’s smartphone has connected to. But smaller details like network state can be used to “fingerprint” a device: to identify it and keep track of what its user does over time.
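A minimal sketch shows why such small details matter: hashing a handful of stable attributes yields a persistent identifier that can follow a device over time even without its location. The attribute names here are illustrative, not a real Android API.

```python
# Sketch: derive a stable device "fingerprint" from minor attributes.
import hashlib

def device_fingerprint(attrs: dict[str, str]) -> str:
    """Hash a canonical, sorted rendering of the attributes."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

print(device_fingerprint({
    "wifi_ssid": "HomeNet",      # illustrative values, not real data
    "carrier": "ExampleCell",
    "model": "Pixel 4a",
    "locale": "en_US",
}))
```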

When they analyzed the final destination for leaked data, the Virginia Tech researchers found that nearly half of the receivers in leaky app pairs sent the sensitive data to a log file. Generally, logged information is only available to the app that created it, but some cyberattacks can extract data from log files, which means the leak could still be dangerous. Other, more immediately dangerous app pairings send data away from the phone over the internet, or even over SMS. Sixteen sender apps and 32 receiver apps used privilege escalation and extracted leaked data in one of those two ways.

— source theatlantic.com by Kaveh Waddell

Facial recognition database used by FBI is out of control, House committee hears

Approximately half of adult Americans’ photographs are stored in facial recognition databases that can be accessed by the FBI, without their knowledge or consent, in the hunt for suspected criminals. About 80% of photos in the FBI’s network are non-criminal entries, including pictures from driver’s licenses and passports. The algorithms used to identify matches are inaccurate about 15% of the time, and are more likely to misidentify black people than white people.

These are just some of the damning facts presented at last week’s House oversight committee hearing, where politicians and privacy campaigners criticized the FBI and called for stricter regulation of facial recognition technology at a time when it is creeping into law enforcement and business.

“Facial recognition technology is a powerful tool law enforcement can use to protect people, their property, our borders, and our nation,” said the committee chair, Jason Chaffetz, adding that in the private sector it can be used to protect financial transactions and prevent fraud or identity theft.

“But it can also be used by bad actors to harass or stalk individuals. It can be used in a way that chills free speech and free association by targeting people attending certain political meetings, protests, churches, or other types of places in the public.”

Furthermore, the rise of real-time face recognition technology that allows surveillance and body cameras to scan the faces of people walking down the street was, according to Chaffetz, “most concerning”.

“For those reasons and others, we must conduct proper oversight of this emerging technology,” he said.

“No federal law controls this technology, no court decision limits it. This technology is not under control,” said Alvaro Bedoya, executive director of the center on privacy and technology at Georgetown Law.

The FBI first launched its advanced biometric database, Next Generation Identification, in 2010, augmenting the old fingerprint database with further capabilities including facial recognition. The bureau did not inform the public about its newfound capabilities nor did it publish a privacy impact assessment, required by law, for five years.

Unlike with the collection of fingerprints and DNA, which is done following an arrest, photos of innocent civilians are being collected proactively. The FBI made arrangements with 18 different states to gain access to their databases of driver’s license photos.

“I’m frankly appalled,” said Paul Mitchell, a congressman for Michigan. “I wasn’t informed when my driver’s license was renewed my photograph was going to be in a repository that could be searched by law enforcement across the country.”

Last year, the US Government Accountability Office (GAO) analyzed the FBI’s use of facial recognition technology, found it to be lacking in accountability, accuracy, and oversight, and made recommendations on how to address the problem.

A key concern was how the FBI measured the accuracy of its system, particularly the fact that it does not test for false positives nor for racial bias.

“It doesn’t know how often the system incorrectly identifies the wrong subject,” explained the GAO’s Diana Maurer. “Innocent people could bear the burden of being falsely accused, including the implication of having federal investigators turn up at their home or business.”
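The kind of testing Maurer describes can be illustrated with a short, invented calculation: the false positive rate is the share of searches for people who are not in the database that nonetheless return a candidate match. A minimal sketch, with hypothetical counts:

```python
# Sketch: the false-positive testing the GAO says is missing. Counts invented.
def false_positive_rate(false_pos: int, true_neg: int) -> float:
    """Fraction of searches for unenrolled people that still flag someone."""
    return false_pos / (false_pos + true_neg)

# Hypothetical: 1,000 probe photos of people known not to be in the database,
# of which 150 nonetheless returned a candidate match.
print(false_positive_rate(150, 850))  # 0.15, i.e. a 15% false positive rate
```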

Inaccurate matching disproportionately affects people of color, according to studies. Not only are algorithms less accurate at identifying black faces, but African Americans are disproportionately subjected to police facial recognition.

“If you are black, you are more likely to be subjected to this technology, and the technology is more likely to be wrong,” said Elijah Cummings, a congressman for Maryland, who called for the FBI to test its technology for racial bias – something the FBI claims is unnecessary because the system is “race-blind”.

“This response is very troubling. Rather than conducting testing that would show whether or not these concerns have merit, the FBI chooses to ignore growing evidence that the technology has a disproportionate impact on African Americans,” Cummings said.

Kimberly Del Greco, the FBI’s deputy assistant director of criminal justice information, said that the FBI’s facial recognition system had “enhanced the ability to solve crime” and emphasized that the system was not used to positively identify suspects, but to generate “investigative leads”.

Even the companies that develop facial recognition technology believe it needs to be more tightly controlled. Brian Brackeen, CEO of Kairos, told the Guardian he was “not comfortable” with the lack of regulation. Kairos helps movie studios and ad agencies study the emotional response to their content and provides facial recognition in theme parks to allow people to find and buy photos of themselves.

Brackeen said that the algorithms used in the commercial space are “five years ahead” of what the FBI is doing, and are much more accurate.

“There has got to be privacy protections for the individual,” he said.

There should be strict rules about how private companies can work with the government, said Brackeen, particularly when companies like Kairos are gathering rich datasets of faces. Kairos refuses to work with the government because of Brackeen’s concerns about how its technology could be used for biometric surveillance.

“Right now the only thing preventing Kairos from working with the government is me,” he said.

— source theguardian.com by Olivia Solon

7 in 10 Smartphone Apps Share Your Data with Third-Party Services

More than 70 percent of smartphone apps report personal data to third-party tracking companies like Google Analytics, the Facebook Graph API, or Crashlytics. When people install a new Android or iOS app, it asks the user’s permission before accessing personal information. Generally speaking, this is positive. And some of the information these apps collect is necessary for them to work properly. But once an app has permission to collect that information, it can share your data with anyone the app’s developer wants to share it with, letting third-party companies track where you are, how fast you’re moving, and what you’re doing.

— source scientificamerican.com

Google Chrome Listening In To Your Room Shows The Importance Of Privacy Defense In Depth

Yesterday, news broke that Google has been stealth-downloading audio listeners onto every computer that runs Chrome, which then transmit audio data back to Google. Effectively, this means that Google had granted itself the right to listen to every conversation in every room that runs Chrome somewhere, without any kind of consent from the people eavesdropped on. In official statements, Google shrugged off the practice with what amounts to “we can do that”.

It looked like just another bug report: “When I start Chromium, it downloads something.” It was followed by strange status information that notably included the lines “Microphone: Yes” and “Audio Capture Allowed: Yes”. Without consent, Google’s code had downloaded a black box of code that, according to its own status flags, had turned on the microphone and was actively listening to the room.

— source privateinternetaccess.com

Chicago police keep nearly 400,000 on a secret watch list

The Chicago Police Department (CPD) keeps nearly 400,000 people on a secret watch list, according to a recent Chicago Sun-Times report and analysis. The list is being used as a surveillance and monitoring tool to crack down on large sections of the working class in Chicago. The drawing up of the Strategic Subject List (SSL) database echoes the operations of the CPD’s notorious Red Squad, which surveilled, infiltrated, and disrupted a wide array of political organizations beginning in the late 1880s and continuing throughout much of the 20th century.

— source wsws.org