Another attack vector was added to 2017's catalogue of hacks: ultrasonic frequencies too high for humans to hear, yet able to silently control personal voice assistants such as OK Google, Siri, and Alexa. The attack can be used to perform a number of operations.

Inaudible Commands Hacking Voice Assistants

- Make calls, for example to a pay-per-call number so scammers can earn money
- Send text messages and WhatsApp audio messages
- Put the phone in airplane mode
- Browse the web
- Download malicious content
- Manipulate the navigation system in an Audi
- Visit a malicious website. The device can open a malicious website, which can launch a drive-by-download attack or exploit the device with 0-day vulnerabilities
- Spying. An adversary can make the victim device initiate outgoing video/phone calls, thereby getting access to the image and sound of the device's surroundings
- Inject fake information. An adversary may instruct the victim device to send fake text messages and emails, publish fake online posts, add fake events to a calendar, etc.
- Denial of service. An adversary may inject commands to turn on airplane mode, disconnecting all wireless communications
- Conceal the attack. The screen display and voice feedback may expose the attack, so the adversary can reduce the odds of detection by dimming the screen and lowering the volume
- The limit is your imagination

A team of researchers from China's Zhejiang University has published new research demonstrating how Siri, Alexa, and other voice-activated programs can be controlled using inaudible ultrasound commands. Dubbed DolphinAttack, it is an ingenious method, but it has a number of constraints and limitations that make it unlikely to cause much disruption for now. However, now that the technique is public, anyone with a bit of know-how can take advantage of it.
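The core signal trick behind such inaudible commands can be illustrated with ordinary amplitude modulation: the voice command rides on a carrier above the audible range. Below is a minimal numpy sketch under simplifying assumptions; a synthetic 1 kHz tone stands in for recorded speech, and the 25 kHz carrier and 0.8 modulation depth are illustrative choices, not values from the research.

```python
import numpy as np

FS = 192_000          # sample rate high enough to represent a >20 kHz carrier
CARRIER_HZ = 25_000   # above the ~20 kHz ceiling of human hearing (assumed value)

def modulate_ultrasonic(baseband: np.ndarray, fs: int = FS,
                        carrier_hz: float = CARRIER_HZ) -> np.ndarray:
    """Amplitude-modulate a baseband signal onto an ultrasonic carrier.

    The result sounds like silence to a human, while the target device's
    microphone can still recover the embedded command.
    """
    t = np.arange(len(baseband)) / fs
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    peak = np.max(np.abs(baseband))
    x = baseband / peak if peak > 0 else baseband
    m = 0.8  # modulation depth, kept below 1 to avoid overmodulation
    # Standard AM: (1 + m * x(t)) * cos(2*pi*fc*t)
    return (1 + m * x) * carrier

# Stand-in for a recorded voice command: a 1 kHz tone, 0.5 s long
t = np.arange(int(0.5 * FS)) / FS
voice = np.sin(2 * np.pi * 1_000 * t)
ultrasonic = modulate_ultrasonic(voice)
```

All of the signal's energy ends up at the carrier frequency and its sidebands (here roughly 24 to 26 kHz), which is why a human standing next to the speaker hears nothing.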
DolphinAttack Demo Video

https://youtu.be/-g-7UORTOuk

How DolphinAttack Hacks Devices

To demonstrate DolphinAttack, the researchers first had to create a program that could translate normal voice commands into frequencies too high (above 20 kHz) for humans to hear. The team then played these commands through a smartphone equipped with an amplifier (acting as the modulated-signal generator) and an ultrasonic speaker. Since the microphones in target devices can pick up frequencies above 20 kHz, the inaudible voice commands were perfectly perceptible to the voice-activated systems and were successfully executed.

DolphinAttack works on almost every major voice recognition system on any device, including smartphones, iPads, MacBooks, and the Amazon Echo. Voice control systems that obeyed the commands include Siri, Alexa, Google Now, Cortana, Samsung S Voice, and a number of in-car interfaces.

The Idea of Inaudible Commands Is Not New!

Although DolphinAttack is the most comprehensive test to date, the idea itself is older. Some companies have already been exploiting this capability: Amazon's Dash Button and Google's Chromecast both use inaudible sounds to pair with a smartphone. Marketers also track users' activity across the web through ultrasonic codes broadcast in advertisements, which work like cookies in a web browser.

Prevention and Possible Fixes

It is astounding how user-friendliness has increasingly compromised our security, making modern technology this susceptible.

For manufacturers: the team suggests that device makers program their devices to ignore commands at 20 kHz or above, and that microphones be designed to resist acoustic signals in the ultrasonic range.

For end users: you can simply turn off the voice assistant in your phone's settings, and the bad guys won't be able to talk to your phone.
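The manufacturer-side fix of ignoring signals at 20 kHz or above can be sketched in software as a crude FFT "brick-wall" low-pass: anything a human could not have spoken is discarded before the audio reaches the recognizer. This is a minimal numpy illustration, not a production filter; real devices would apply proper analog or anti-aliasing filtering at capture time, and the 192 kHz capture rate here is an assumption chosen so the ultrasonic carrier is representable at all.

```python
import numpy as np

FS = 192_000        # assumed capture rate; real devices often use 44.1 or 48 kHz
CUTOFF_HZ = 20_000  # upper edge of human hearing

def strip_ultrasonic(signal: np.ndarray, fs: int = FS,
                     cutoff_hz: float = CUTOFF_HZ) -> np.ndarray:
    """Zero out all spectral content above the audible band.

    Removing everything above the cutoff means an ultrasonic carrier
    (and the command modulated onto it) never reaches the recognizer.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    spectrum[freqs > cutoff_hz] = 0
    return np.fft.irfft(spectrum, n=len(signal))

# An audible 1 kHz tone mixed with a hypothetical 25 kHz attack carrier
t = np.arange(FS) / FS
audible = np.sin(2 * np.pi * 1_000 * t)
mixed = audible + np.sin(2 * np.pi * 25_000 * t)
cleaned = strip_ultrasonic(mixed)
```

After filtering, only the audible 1 kHz component survives; the 25 kHz carrier is removed entirely, which is exactly the behavior the researchers recommend building into device microphones and firmware.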