A vibrant democratic republic depends on an engaged, educated populace equipped to discern truth from falsehood and to make wise decisions at the ballot box and in the community. An informed citizenry is essential to America’s safety and success, but when foreign actors perpetrate fraud or strive to manipulate information, government must act in the interest of the nation’s defense.

In light of recent news that Russia used social media, specifically Facebook and Twitter, to disseminate salacious propaganda intended to influence the electorate, Silicon Valley should help government ensure that social media platforms are used ethically.

One proposal is to require that the sponsor of paid content be transparent and visibly identified. That would help the public more readily distinguish content created by established media companies from the “fake news” apparently promoted by hackers to stir public consternation. Facebook has already announced a list of actions it will take to bring more transparency to political ads, including having advertisements link to the Facebook page of the company or organization that paid for them.

Facebook has come under particular scrutiny recently after it was reported that the social media giant turned over information about advertisements purchased by Russia-linked accounts. The public now knows that entities tied to the Russian government deliberately used social media platforms to push sensationalized stories aimed at manipulating voter perceptions. These same Russian groups also created fake social media accounts to share content, organized events targeted at rallying particular audiences across the U.S. and purchased at least $100,000 worth of political advertising.

This content was seen by millions, as Facebook, Google and Twitter have become some of the largest media companies in the world. That reach underscores the need for Silicon Valley to weigh the ethical implications of algorithms that influence so large a share of the global public.

While greater transparency in the sourcing of paid content may be a wise place to start, adjudicating the legality of those policy reforms will take time. And even if these companies disclose who is paying to promote content, the veneer of political action committees or shell corporations can obscure who precisely is seeking to exert influence.

For example, in January a U.S. intelligence report determined that a wealthy financier with ties to Russian intelligence had bankrolled a team of “trolls” to promote malicious content on American social media platforms. If Facebook cited only the name of the financier, or of his “Internet Research Agency,” users would be unable to rapidly discern his intelligence ties.

Whatever the public policy, it is incumbent on individuals to become informed, engaged consumers of media, to understand potential pitfalls and to develop an awareness of content that aims to manipulate. Meanwhile, these companies must work with government actors so that their platforms no longer deepen the nation’s polarization by virally spreading falsified content.