Will social media companies be required to report on terrorist activities?
A bill in Congress is looking to make social media firms responsible for disclosing any content that could indicate terrorist activity.
Companies such as Google, Facebook and Twitter could be required to monitor emails, tweets and videos for any suspicious content.
The bill comes as a result of law enforcement agencies struggling to cope with the increased use of social media to radicalise people.
The UK has recently been dealing with a number of cases of young men and women leaving to join ISIS, having apparently been groomed online.
While it is unknown whether a measure like this would be implemented in the UK, PM David Cameron has recently called for encrypted messaging services to be banned.
A growing public demand for stronger encryption in the wake of leaks by NSA whistleblower Edward Snowden stands in contrast with demands from security agencies for better access.
Faisal Ghaus of TechNavio told CBR: "Because data breaches can be catastrophic for an organization, encryption software is being adopted increasingly to control these violations and create a secure information environment.
"Many encryption software providers have integrated features that enhance the use of encryption solutions into smartphones or smart devices."
This bill could place the social media companies in the uncomfortable position of being responsible for reporting suspicious activity.
While these companies already comply with some requests for information, this bill would place a stronger enforcement role on them.
Many large Silicon Valley companies now produce annual transparency reports which show how many requests they receive from various countries, as well as how many they comply with.
Any policy requiring increased monitoring by tech firms would likely put the UK out of step with EU data privacy regulation.
The bill in the U.S. would be an extension of existing federal requirements related to policing child pornography. However, weeding out terrorist messages could be much more complex.
With child pornography, the content is typically identified automatically by software, whereas terrorist content is more often flagged by users on the platform.
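To illustrate the software-based approach mentioned above: platforms maintain databases of fingerprints of known illegal images and check uploads against them. A minimal sketch follows, using a plain cryptographic hash for simplicity; real systems such as Microsoft's PhotoDNA use perceptual hashing that also catches resized or re-encoded copies, and the function and database names here are hypothetical.

```python
import hashlib

# Hypothetical database of fingerprints of known prohibited files.
# (Real systems use perceptual hashes, not cryptographic ones; this
# entry is the SHA-256 of the empty byte string, for demonstration.)
KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def flag_known_content(upload: bytes) -> bool:
    """Return True if the uploaded bytes match a known-content hash."""
    digest = hashlib.sha256(upload).hexdigest()
    return digest in KNOWN_HASHES
```

Matching known material this way is mechanical, which is why it scales; terrorist content, by contrast, is usually novel text or video with no prior fingerprint, so it depends on human reporting.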
David Grannis, Intelligence Committee minority staff director, said that it wouldn’t significantly change the role companies already play. "Companies are already in the business of making a determination of what this terrorist content could be."
Twitter, which has come in for criticism from lawmakers for not taking down content, said: "We review all reported content against our rules, which prohibit unlawful use, violent threats, and the promotion of terrorism.
"Law-enforcement authorities can request information about individual Twitter accounts through valid legal process outlined on our site."