
CyberLink FaceMe® Joins MediaTek's New Genio AIoT Platform to Deliver High-Performance Facial Recognition AIoT Applications

As a member of MediaTek's AIoT partner ecosystem, CyberLink's FaceMe facial recognition engine can now be integrated with the Genio 1200, MediaTek's flagship AIoT system-on-chip, making it an ideal choice for facial recognition AIoT applications

[Taipei, Taiwan, July 21, 2022] CyberLink (5203.TW), a leader in AI facial recognition, announced that its FaceMe® facial recognition engine is now integrated with the Genio 1200, the flagship chip of IC design giant MediaTek's AIoT platform. The combination of FaceMe's flexible, accurate facial recognition and MediaTek's high-performance, low-power Genio platform offers an even better choice for facial recognition AIoT applications.

CyberLink's cross-platform facial recognition engine FaceMe ranks among the most accurate facial recognition engines in the world, with a 99.73% recognition accuracy rate at a one-in-a-million false acceptance rate. FaceMe supports a wide range of operating systems and is optimized for various IoT and AIoT platforms, including MediaTek Genio, giving IoT/AIoT developers and system integrators outstanding facial recognition technology and greater flexibility in vertical-industry deployments.

MediaTek's comprehensive Genio AIoT platform provides chipsets with outstanding performance and power efficiency, an open SDK platform, a range of AI model optimization and calibration tools, and technical support. As the flagship of the Genio family, the Genio 1200 is designed for premium AIoT products and edge-processing needs; its combination of CPU, GPU, and APU (AI processing unit) maximizes AI capability, meeting the requirements of AI facial recognition engines such as FaceMe along with multimedia performance and low power consumption. The integration of FaceMe with the Genio 1200 has already achieved excellent results on Android devices.

Richard Lu, Deputy General Manager of MediaTek's IoT business unit, said: "AI applications drive all kinds of innovation, and MediaTek's mission is to help businesses adopt the latest and most powerful edge technologies through the Genio AIoT platform. We look forward to seeing CyberLink's facial recognition products, powered by the Genio 1200, bring users many new experiences."

Dr. Jau Huang, CyberLink's Chairman and CEO, said: "As AIoT applications become widespread, the market will demand more efficient, energy-saving solutions. This collaboration between MediaTek and CyberLink combines Genio's outstanding AI performance with FaceMe's fast, accurate, and reliable facial recognition engine, fully meeting market needs."

About Version 2

Version 2 Digital is one of the most dynamic IT companies in Asia. The company distributes a wide range of IT products across various areas including cyber security, cloud, data protection, endpoints, infrastructure, system monitoring, storage, networking, business productivity and communication products. Through an extensive network of channels, points of sale, resellers, and partner companies, Version 2 offers quality products and services which are highly acclaimed in the market. Its customers cover a wide spectrum which includes Global 1000 enterprises, regional listed companies, different vertical industries, public utilities, government, a vast number of successful SMEs, and consumers in various Asian cities.

About CyberLink
Founded in 1996, CyberLink is a multimedia software company with industry-leading video and audio technologies, specializing in digital audio/video software and multimedia streaming solutions. Guided by a strategy of anchoring its core technologies while expanding its global marketing reach, the company is rooted in Taiwan with a worldwide footprint and a strong track record. CyberLink's advanced technology delivers high-definition audio/video playback along with complete high-definition capture, editing, production, and disc-burning features, with full support for a wide range of high-definition video and audio formats. Its products include PowerDirector, PowerDVD, PowerProducer, and Power2Go.

Understanding Coordinated Inauthentic Behavior (CIB): What it is and How it Impacts the General Public

The term Coordinated Inauthentic Behavior (CIB) is used frequently in the news to describe the propagation of misinformation, misrepresentation, and other types of negative online influence operations. Reports of CIB have recently led to the large-scale removal of accounts and pages on social media platforms. An example of CIB could be a political news site that claims to be headquartered in America but actually operates from Macedonia, or a Russian-created social media account that uses a fictitious name and random images to feign an American perspective while blogging about US politics.

It can take the following two forms:

  1. Coordinated inauthentic behavior (CIB) regarding domestic non-government campaigns
  2. Coordinated inauthentic behavior by a foreign or government actor, termed Foreign or Government Interference (FGI)

The objectives of both variants are the same. They are a part of larger coordinated campaigns that seek to influence public perspectives across social media platforms to further their agendas, both politically and socially.

What is Coordinated Inauthentic Behavior (CIB)?

Domestic, non-government campaigns comprising groups of accounts and pages on the internet, especially on social media, that aim to deceive people about who they are and what they do are often regarded as Coordinated Inauthentic Behavior (CIB). Whether they involve accounts, pages, or groups, such behavior occurs when numerous bogus identities or personas collaborate to promote a specific idea, item, or media subject with an ulterior intent. It comprises influence operations aimed at manipulating public opinion for a strategic purpose, which can be financial or political. For instance, during the Covid-19 outbreak, a network of web pages was active in spreading coronavirus misinformation.

What Impact Does CIB Have on the Regular Public?

Coordinated Inauthentic Behavior aims to manipulate public debate, push users toward political and social extremes, and ultimately provoke clashes between communities and religious groups. Depending on the operators' objective, CIB may seek to sway public opinion or, where the goal is financial exploitation, to defraud users through scams.

The potential for misinformation to shape international politics and public opinion is large, and has been proven time and again. CIB goes a step further, intentionally targeting and misleading specific individuals rather than merely propagating false news. A major problem with CIB lies in its ability to shift public opinion in a short period of time, which makes removing the offending accounts almost useless in the long term: by then, their original goal has already been accomplished.

Identifying CIB on Facebook and Other Social Media Platforms

In recent years, the global rise of trolls and bots that manipulate public discussion on social media has created significant challenges for political elections, natural-disaster communication systems, and global health emergencies such as the Covid-19 pandemic. However, progress has been made in using standard supervised learning to detect such adversaries.
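The "standard supervised learning" mentioned above can be pictured with a deliberately tiny sketch: label some accounts as bot-like or human-like, then classify new accounts by which class they resemble. Everything here is hypothetical; the feature names, values, and the nearest-centroid rule are simple stand-ins for the far richer signals (network structure, posting cadence, content similarity) and models real platforms use.

```python
# Toy supervised bot detection via nearest-centroid classification.
# Features (all hypothetical): posts per day, account age in days,
# and the fraction of posts that are duplicated text.

def centroid(rows):
    """Mean of each feature column across the labeled examples."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Labeled training data: (posts_per_day, account_age_days, duplicate_ratio)
bots   = [(120, 20, 0.90), (200, 5, 0.95), (80, 40, 0.70)]
humans = [(3, 900, 0.05), (10, 400, 0.10), (1, 2000, 0.00)]

bot_c, human_c = centroid(bots), centroid(humans)

def classify(account):
    """Assign the label of whichever class centroid is closer.
    A real system would normalize features first so no single
    feature (here, account age) dominates the distance."""
    return "bot" if dist(account, bot_c) < dist(account, human_c) else "human"

print(classify((150, 10, 0.85)))  # high-volume, brand-new, repetitive
print(classify((2, 1500, 0.02)))  # low-volume, long-lived, original
```

In practice, detection is much harder than this sketch suggests, because coordinated operators deliberately mimic organic behavior; that is why platform takedowns rely on many correlated signals rather than any one feature.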

If you know where to look, coordinated inauthentic behavior by people and organizations on social media is relatively simple to spot. Several indicators on Facebook pages and groups, like those described below, can help users better assess the content they are viewing and the intentions of those behind it.

  1. The ‘Page Transparency’ Section

Every Facebook page includes a “Page Transparency” section that shows viewers the countries from which the page's admins post. The section is available in both the mobile and desktop views; however, it is not available for Facebook groups.

  2. Posts with Multiple ‘Like and Share’ Requests Might Signal a Problem

A page overloaded with photographs and memes urging users to like and share the content might indicate organized inauthentic conduct. According to Snopes, while this does not always point to questionable activity, an overload of this type of media is frequently associated with inauthentic pages trying to gain traction.

  3. ‘Blue Ticked’ Verified Pages

Blue badges appear next to the group or profile name on verified pages. Be it on Facebook, Twitter, or Instagram, the blue tick next to the user's profile name represents an authenticated account: the page or profile belongs to an authorized individual or organization. An unverified page (one without the blue badge) that claims to represent your favorite celebrity and asks for money for some social cause is unlikely to be genuine. Being cautious about which accounts claim to act for certain organizations or people is an important part of staying safe online.

  4. Check the ‘Page Creation Date’

Check the date the page, group, or profile was created, especially for politically focused forums hosting serious debates. For instance, it is a red flag if a page about a hot-button American political issue was created merely a week ago and its transparency information shows that the page managers are located in another country; it takes time for outsiders to become involved in a country's debate on a serious domestic issue. You can click the “Page Transparency” link on a page, or the “About” tab in a group, to find the creation date.

  5. Examine the Administrators and Moderators of a Facebook Community

Since Facebook groups (unlike pages) disclose their administrators, moderators, and members, you can check a group's “Members” section to see who is running it and whether the admins appear to be authentic individuals.
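Taken together, the five indicators above amount to a manual checklist, which can be sketched as a simple scoring function. All field names below are hypothetical placeholders for facts a user would gather by hand from a page's “Page Transparency” section, its posts, and its member list; this illustrates the heuristics, not an actual detection tool.

```python
# Sketch of the manual red-flag checklist as a scoring function.
# Every dictionary key is a hypothetical placeholder for manually
# gathered information about a page or group.

def red_flag_score(page):
    """Return the list of checklist red flags a page triggers."""
    flags = []
    # 1. Admins posting from a country other than the one the page claims.
    if page.get("admin_countries") and page.get("claimed_country") not in page["admin_countries"]:
        flags.append("admins post from a different country than the page claims")
    # 2. Most posts begging for likes and shares.
    if page.get("share_request_ratio", 0) > 0.5:
        flags.append("most posts urge users to like and share")
    # 3. An unverified page posing as a public figure or organization.
    if not page.get("verified", False) and page.get("impersonates_public_figure", False):
        flags.append("unverified page posing as a public figure")
    # 4. A very new page pushing a hot-button political issue.
    if page.get("age_days", 10**6) < 30 and page.get("topic_is_political", False):
        flags.append("very new page pushing a political hot-button issue")
    # 5. Administrators and moderators hidden from view.
    if page.get("admins_hidden", False):
        flags.append("administrators/moderators are not visible")
    return flags

suspect = {
    "claimed_country": "US", "admin_countries": ["MK"],
    "share_request_ratio": 0.8, "verified": False,
    "impersonates_public_figure": False,
    "age_days": 7, "topic_is_political": True, "admins_hidden": True,
}
for flag in red_flag_score(suspect):
    print("RED FLAG:", flag)
```

No single flag proves inauthenticity, which is why the article recommends weighing several indicators together before drawing conclusions about a page.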

Examples/Case Studies of CIB

The following are a few well-known recent campaigns involving CIB.

  1. #SaveTheChildren Campaign

The #SaveTheChildren campaign purposefully propagated the notion that a “cabal” of celebrities and political figures participated in satanic, ritual sexual assault of children worldwide.

In 2020, a conspiracy protest movement known as #SaveTheChildren surged throughout the United States, Canada, the United Kingdom, and Europe, sparking hundreds of in-person marches and protests. The #SaveTheChildren campaign’s claimed purpose was to raise awareness about the atrocities of “child sex trafficking.” 

The main inspiration behind the campaign was the QAnon conspiracy movement, started in October 2017 by an anonymous user of the 4chan website who later became known as “Q.” This user claimed to be privy to top-secret government intelligence suggesting, among other fraudulent theories, that Hillary Clinton was wanted by the federal government and was about to be arrested.

  2. Ebola and the United States Border

Brian Kolfage, a Trump supporter and anti-immigrant activist, raised millions of dollars in internet donations to build a wall at the US-Mexico border. When the US government ordered the work stopped after two days, he tweeted that an “insider” had notified him that construction had been halted because there were nine migrants with “proven” Ebola cases at the Texas border. The assertion was false, but the Ebola hoax quickly spread across the country on social media and among right-wing organizations.

He used disinformation to spread panic, exploiting the issue of immigration to gather support for his political aim of curbing it, a long-standing pledge of then-US President Donald Trump.

  3. The Milk Tea Alliance

The Milk Tea Alliance is a multinational online network of young people coordinating media campaigns under the hashtag #MilkTeaAlliance. Its supporters include young people from Thailand, Hong Kong, Taiwan, and Myanmar, who use the hashtag to push back against what they see as authoritarianism, whether from the CCP (Chinese Communist Party) or from their own governments.

It surfaced in April 2020, after pro-CCP accounts launched an online campaign to harass a Thai celebrity and his fans. A loosely organized group of young, largely Southeast Asian, pro-democracy netizens banded together in response, culminating in a meme war between the two sides on Twitter.

  4. The Antifa Fires Rumor

During the Oregon wildfires in September 2020, allegations circulated locally and globally that left-wing activists were to blame. The evidence alleging “anti-fa” involvement was based on a series of misinterpretations made by public authorities. The rumor was boosted by far-right political influencers, bogus Antifa Twitter accounts, and various anonymous trolling communities on the 4chan website.

  5. “Hammer” and “Scorecard”

The 2020 US presidential election was disrupted by unfounded accusations of widespread voting fraud, promoted by former President Donald Trump, whose allegations came to be known as “the big lie.” The conspiracy theory that prompted this coordinated behavior was said to involve two components: an alleged government-run supercomputer called “Hammer” and its application software, “Scorecard,” working in tandem. The allegation was that the “Hammer and Scorecard” operation altered real votes across the country in favor of Joe Biden.

Final Words

With the ever-increasing accessibility and popularity of the internet and social media, influence operations and new deceptive behaviors will continue to emerge and spread despite relevant regulations. Social media networks must keep working to identify and stop Coordinated Inauthentic Behavior (CIB) campaigns and any other kind of large-scale misinformation campaign. As previously noted, however, users must also stay educated and cautious: awareness helps them recognize CIB activity and take precautions against falling into its traps.

References

  1. Aziz, Z. (2020, November 2). What is coordinated inauthentic behavior? Nisos. https://www.nisos.com/blog/what-is-coordinated-inauthentic-behavior/
  2. Meta. (2018, December 6). Coordinated inauthentic behavior. https://about.fb.com/news/tag/coordinated-inauthentic-behavior/
  3. Graham, T. (2020, May 29). Detecting and analysing coordinated inauthentic behaviour on social media. QUT Centre for Data Science. https://research.qut.edu.au/qutcds/events/detecting-and-analysing-coordinated-inauthentic-behaviour-on-social-media/
  4. Gleicher, N. (2018, December 6). Coordinated inauthentic behavior explained. Meta. https://about.fb.com/news/2018/12/inside-feed-coordinated-inauthentic-behavior/
  5. Johnson, S. (2021, December 21). How to spot ‘coordinated inauthentic behavior’ on Facebook, according to Snopes. Lifehacker. https://lifehacker.com/how-to-spot-coordinated-inauthentic-behavior-on-faceb-1848253059
  6. McGregor, S. (2020, September 17). What even is ‘coordinated inauthentic behavior’ on platforms? Wired. https://www.wired.com/story/what-even-is-coordinated-inauthentic-behavior-on-platforms/



About VRX
VRX is a consolidated vulnerability management platform that protects assets in real time. Its rich, integrated features efficiently pinpoint and remediate the largest risks to your cyber infrastructure. Resolve the most pressing threats with efficient automation features and precise contextual analysis.
