SINGAPORE (Nov 12): Reid Hoffman, who co-founded LinkedIn and before that was chief operating officer at PayPal, has come out in support of “some regulation” in the tech industry, provided it does not stifle innovation or jeopardise the US tech industry’s leadership.

In an interview on Bloomberg TV, broadcast on Nov 8, Hoffman acknowledged that the responsibilities of the tech sector have changed. “We’re not just the challengers and the disruptors. We’re becoming part of the infrastructure,” he said. “We have to be transparent. We have to be disclosive... we have to be more careful with which risks we take, because we have a greater impact with that.”

Hoffman’s comments come amid growing concern about how tech and social media platforms have come to dominate our lives, about the near-automatic collection of data from users that this entails, and about the repercussions of that collection.

There are also other growing dangers. This week, for instance, Facebook admitted its platform was used to incite real violence in Myanmar against the persecuted Rohingya minority. The social network also said it had shut down more than 100 accounts it suspected of being linked to efforts to influence the midterm elections in the US. Earlier this year, Facebook came under fire when it emerged that data mining firm Cambridge Analytica had used data from millions of Facebook profiles in campaigns to influence the outcomes of the 2016 US presidential election and Brexit referendum.

Most of the proposals to deal with these problems revolve around some form of regulation of the tech companies. Yet, it is not just tech companies and corporates that are collecting personal data; states are also able to amass information on individuals, from names and other identifiers to travel patterns and movements across town. What form, then, should regulation take?

Experts from both industry and government have also raised the need for transparency, which includes giving people full details of what personal data is being collected and how it will be used, and asking whether they consent to it. In Europe, for instance, the General Data Protection Regulation, which came into force in May, allows people to withhold consent for certain uses of their data and to request that their data be erased.

It would seem that the tech industry itself is beginning to realise its responsibility to society. Facebook, for one, has outlined its efforts to prevent further misuse of its platform to incite hatred and violence. In March, Microsoft president Brad Smith called for standards of accountability and a “Hippocratic Oath” among artificial intelligence (AI) developers to “do no harm”.

Earlier this year, Harvard University and the Massachusetts Institute of Technology started jointly offering a new course on ethics and governance in the field of AI. The syllabus will cover algorithmic decision-making and machine learning, as well as the effects of AI on the dissemination of information. It will also include “the search for balance between regulation and innovation” and, significantly, “questions related to individual rights, discrimination and architectures of control.”

And soon, Stanford University, which sits in the heart of Silicon Valley and whose alumni lead some of the biggest tech companies, is set to inject ethics into its teaching and research programmes.

Perhaps the issue is not whether there should be more regulation. It might be better instead to impress upon companies the need to be transparent and ethical in their conduct, in order to avoid onerous rules that could hurt even the deepest pockets. As Hoffman suggests: “If you’re not being disclosive enough, then we’ll pick regulation to make you disclose.”

Sure, we worry that companies will do their best to avoid disclosing problems and transgressions. But in a world of instant information access and broadcast, it would be hard to hide.

This story appears in The Edge Singapore (Issue 856, week of Nov 12), which is on sale now.