While the definition of “smart city” is still under debate, one thing is indisputable: the technologies used to make smart cities a reality are currently acquired and deployed after very little (or even no) security testing.
Cesar Cerrudo, CTO at IOActive and board member of the Securing Smart Cities initiative, says that city governments – the buyers of these technologies – often blindly trust vendors when they say that their products are secure.
They ask vendors to fill out a questionnaire with questions such as “Does your product use authentication?”, “Does your product use encryption?”, “Does your product …?”, but they don’t check whether the answers are true, whether everything works as it should, or whether the security features are strong and free of glaring vulnerabilities.
Some governments, on the other hand, have good security policies in place that require technology to be security tested before implementation, but these policies are not enforced and are, therefore, moot.
All in all, as things stand now, there is little incentive for vendors of smart city technologies to think much about security.
But here’s an incentive for governments: weaknesses and vulnerabilities in smart systems monitoring and regulating traffic, transportation systems, water supply networks, emergency services, electrical power systems and various community services could ultimately lead to cascading failures that could harm many people both directly and indirectly.
Why should governments care?
“In smart cities everything depends on technology and most systems are interconnected,” Cerrudo explained to Help Net Security.
This interconnectedness increases functionality, but also the possibility of a vulnerability in one system causing problems in others: a chain reaction that ends up affecting critical systems and hurting people.
Interconnectedness therefore also increases the challenge of properly securing every component that could become the spark that burns down the entire forest, so to speak.
The big challenge for governments is to find and deploy easily scalable technologies that deliver tangible benefits (better services, reduced costs) as well as benefits that are less immediately obvious (security, privacy).
He says that governments should employ skilled people, define good security policies and enforce them, create a cybersecurity department or team to coordinate security initiatives and respond to all security-related incidents, and – this can’t be stressed enough – never acquire technological solutions before auditing them for security features and issues.
Don’t implement anything without security testing
Particular attention should be given to how encryption, authentication and authorization features have been implemented.
“Encryption-related problems such as easily discoverable or guessable keys, weak keys, and hardcoded keys are very common and make encryption useless,” Cerrudo noted. Any encryption mechanism should also be properly audited in a test implementation that resembles a real-life deployment, to avoid unpleasant post-implementation surprises.
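As a rough illustration of the hardcoded-key problem, the Python sketch below contrasts a key baked into the source with per-device key provisioning. The file path, key length and function names are assumptions made for this example, not any vendor’s actual mechanism.

```python
import os
import secrets

# The anti-pattern Cerrudo describes: a key baked into the firmware or source.
# Every deployed device shares it, and anyone who extracts the image has it.
HARDCODED_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")  # do NOT do this

def provision_device_key(path: str = "/etc/device/aes.key") -> None:
    """Generate a unique random 256-bit key per device at installation time."""
    key = secrets.token_bytes(32)
    with open(path, "wb") as f:
        f.write(key)
    os.chmod(path, 0o600)  # readable only by the service that needs it

def load_device_key(path: str = "/etc/device/aes.key") -> bytes:
    """Load the per-device key and reject obviously weak (short) keys."""
    with open(path, "rb") as f:
        key = f.read()
    if len(key) < 32:
        raise ValueError("device key too short")
    return key
```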
He put forward LoRaWAN (a protocol for managing communication between LPWAN gateways and end-node devices) as an example of a technology that is wrongly assumed by many to be secure.
“LoRaWAN uses encryption for integrity and confidentiality, but because of implementation and key management issues the keys are everywhere, so LoRaWAN networks are not difficult to hack. The same happens with other technologies: implementation errors and issues ultimately make them insecure.”
Authentication issues such as hardcoded usernames and passwords, default passwords and (especially!) a total lack of authentication requirements should also be discovered before implementation gets under way. If authentication is weak, attackers can easily compromise the device and take full control.
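The point about default and hardcoded credentials can be sketched in a few lines of Python. The credential list, user-database layout and PBKDF2 parameters below are illustrative assumptions, not a description of any real product’s scheme.

```python
import hashlib
import hmac

# Factory defaults like these are exactly what attackers try first.
DEFAULT_CREDENTIALS = {("admin", "admin"), ("admin", "1234"), ("root", "root")}

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 from the standard library; the iteration count is illustrative.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def authenticate(username: str, password: str, user_db: dict) -> bool:
    """Verify against a salted hash; user_db maps username -> (salt, hash)."""
    if (username, password) in DEFAULT_CREDENTIALS:
        return False  # force operators off factory defaults before going live
    record = user_db.get(username)
    if record is None:
        return False
    salt, stored_hash = record
    candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_hash)  # constant-time comparison
```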
“Authorization is often ignored by technology vendors, as many believe that if something is undocumented and hidden, it’s secure,” he says. But by reverse-engineering the solution’s firmware and software, an attacker can identify interfaces, learn custom protocols, functionalities, etc. and abuse them without needing any permission.
It’s important to audit every functionality, and any external interface should require proper authorization before it can be accessed. Insecure default configurations must also be uncovered: some technologies are completely open by default, exposing all functionality. This exposure should be reduced to the minimum functionality by default, with the option to enable the rest once it has been verified as secure, Cerrudo advised.
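One way to picture that advice: a configuration that ships with every optional feature disabled, plus an explicit authorization check on every externally reachable function. The roles, fields and handler below are hypothetical, a minimal sketch rather than a reference implementation.

```python
from dataclasses import dataclass, field
from functools import wraps

@dataclass
class DeviceConfig:
    """Ship with the minimum enabled; operators opt in to the rest."""
    remote_admin_enabled: bool = False
    debug_interface_enabled: bool = False
    firmware_update_enabled: bool = False
    allowed_roles: set = field(default_factory=lambda: {"operator"})

def require_role(role: str):
    """Every external interface goes through an explicit authorization check,
    including 'hidden' or undocumented ones."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(session: dict, *args, **kwargs):
            if role not in session.get("roles", set()):
                raise PermissionError(f"role '{role}' required")
            return handler(session, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def update_firmware(session: dict, image: bytes) -> None:
    ...  # placeholder for the actual update logic
```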
That said, there is no such thing as a totally secure and “unhackable” technology: there will always be vulnerabilities. Technology vendors should invest time and effort into discovering and fixing them, but they should also be prepared to react quickly when someone hacks their solutions.