Walabot AI Deadbolt: Open-Source Project Share


Description

What is an AI face-recognition lock?

The iPhone X's Face ID took people by surprise, and through AI deep learning people began to realize that their face is even more unique and accurate than their fingerprint.

What some people do not realize, though, is that iPhone X face recognition works because it only has to detect you or not-you, which makes it far more accurate than using AI to detect many different targets.

We built a platform that shows how AI can run on the Intel Movidius NCS, using the default camera that ships with the development kit. This project can be extended to use face recognition to unlock a deadbolt, log entries, turn on different lighting themes, and much more.

We trained the entire network on Caffe, reaching over 99% accuracy with the "me or not me" approach. An additional radar (the Walabot) is added to the project to make sure a simple photo cannot pass the test.

In this guide we will create a face-recognition network using a convolutional neural network, guard it with the Walabot to detect distance and the user's breathing, and then open the deadbolt through Alexa.

Alexa skill

Our Alexa skill, with its skill ID:

[Figure: Walabot Alexa skill]

You can also set up multiple users through account linking, following the user-linking section of the guide.

Account linking is not required here. To use the public Alexa skill on your own device you can use {YOUR_SERVER}, since that connection points at the test server for the Alexa skill. For better security you can also follow the guide and use your own server, since all of the server code is open source as well.

Step 1: Required equipment

  • Up2 board powered by Intel x86 (running Ubuntu)
  • Movidius Neural Compute Stick
  • Walabot Creator edition
  • Any USB camera

Note: we ran into problems installing the Walabot software on the Nvidia Jetson, since it does not support arm64-based chips. For this example we will use the Up2 board + Movidius NCS.

[Figure: All the components needed for the AI face lock]

Step 2: Taking pictures of your face

There is no way around it: we need thousands of images of your own face. You can try to get them through Google Photos or Facebook, but another easy way to train on your face is simply to use your computer to record videos of yourself showing different emotions.


After filming about 5 minutes of selfie footage, you can use software such as Total Video Converter to turn it into images for training. While doing this, keep the video at 640x480 so it does not take up too much space. For this guide I used about 3,000 images of myself, taken from roughly one minute of video.
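
If you prefer to script this step instead of using converter software, a minimal frame-extraction sketch with OpenCV is below. This is an assumption-laden sketch, not part of the original project: 'selfie.mp4' and the 'faces' output folder are placeholder names, and it assumes Python 3 with opencv-python installed.

import os
import cv2

os.makedirs('faces', exist_ok=True)
cap = cv2.VideoCapture('selfie.mp4')   # placeholder: your recorded selfie video
count = 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Keep frames at 640x480 so the dataset stays small, as noted above
    frame = cv2.resize(frame, (640, 480))
    cv2.imwrite('faces/face_%05d.jpg' % count, frame)
    count += 1
cap.release()
print('saved %d frames' % count)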

To be clear, that is 3,000 images of myself plus 3,000 images of random other things around the environment for it to test against: other people's faces, or just empty space. In total this guide uses 6,000 images.

[Figure: Training-set data for "me" vs. "not me"]

Step 3: Training your face

Now that you have your own images, we can train the network on them.

The specific framework we use is Caffe. There are many ways to train the model, but we can use an open-source approach that already has the right parameters. For this project I leveraged the open-source project at https://github.com/hqli/face_recognition

To make this work you will need a Linux machine with a GPU or CPU; we prefer AWS or Azure machines built specifically for machine learning. Intel DevCloud also offers free clusters you can train on.

Upload the images to the server; we first train on the 3,000 images of our own face and the 3,000 images of other faces. Let's put the folders under face_training so it is easy to follow:

/home/ubuntu/face_training

Using the train_lmdb.py that comes with the code, you will be able to create the LMDB image database needed for training.
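
If you do not have train_lmdb.py handy, the sketch below shows the general shape of such a script, assuming the lmdb and caffe Python packages plus OpenCV are installed. The me/ and not_me/ folder names and the 55x47 crop size are assumptions for illustration, not the project's exact settings.

import glob
import random

import cv2
import lmdb
from caffe.proto import caffe_pb2

def make_datum(img, label):
    # Caffe stores images channel-first (C, H, W), in BGR order
    datum = caffe_pb2.Datum()
    datum.channels, datum.height, datum.width = 3, img.shape[0], img.shape[1]
    datum.label = label
    datum.data = img.transpose(2, 0, 1).tobytes()
    return datum

samples = [(p, 1) for p in glob.glob('me/*.jpg')] + \
          [(p, 0) for p in glob.glob('not_me/*.jpg')]
random.shuffle(samples)

db = lmdb.open('input/train_lmdb', map_size=int(1e12))
with db.begin(write=True) as txn:
    for i, (path, label) in enumerate(samples):
        img = cv2.resize(cv2.imread(path), (55, 47))  # assumed input size
        txn.put('{:08d}'.format(i).encode(),
                make_datum(img, label).SerializeToString())
db.close()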

We can also render the Caffe model as a PNG with the following command:

python /opt/caffe/build/tools/draw_net.py /home/ubuntu/face_training/deepID_solver.prototxt /home/ubuntu/face_training/caffe_model_face.png
[Figure: The Caffe model]

After that we need to compute the mean image (use your own caffe folder to launch Caffe). The mean image is subtracted from each input image, ensuring that every feature pixel has zero mean.

/opt/caffe/build/tools/compute_image_mean -backend=lmdb /home/ubuntu/face_training/input/train_lmdb /home/ubuntu/face_training/input/mean.binaryproto

We can tweak face_recognition's solver and deepID_train_test_2.prototxt (see the source files), then run the following command:

/opt/caffe/build/tools/caffe train --solver /home/ubuntu/face_training/deepID_solver.prototxt 2>&1 | tee /home/ubuntu/face_training/deepID_model_train.log

After 2,000 iterations you should have a snapshot of the trained AI model that you can use.

Run the following command to plot the training curve:

python plot_learning_curve.py ~/caffe_model_face/model_face_train.log ~/caffe_model_face/caffe_model_face_learning_curve.png 
[Figure: Face training curve]

Step 4: Setting up the Up2 board with the Movidius NCS SDK

The Up2 board already comes with Ubuntu installed, but in case you want a fresh install you can follow the board's installation instructions.

Once the Up2 board is set up, we can log in to Ubuntu and install the Movidius SDK:

[Figure: After the Movidius installation]

This step is about getting the Up2 board ready, starting with installing the NCS SDK; we already have Caffe installed and ready to run. We need the following files:

deepID_deploy.prototxt from the source code (originally from https://github.com/hqli/face_recognition); change num_output: 2, or to however many faces you are using.

categories.txt: create a file with "unknown" on the first line and "you" on the second.
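
For a two-class "me or not me" model the file is just two lines, matching num_output: 2 (class 0 listed first):

unknown
you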

Go to the FaceNet folder and run mvNCCompile.pyc from the bin folder:

python3 ../../../bin/mvNCCompile.pyc deepID_deploy.prototxt -w snapshot_iter_300.caffemodel

This will generate the graph file you need. Just copy over inputsize.txt and stat.txt and we can give it a try; our program runs with:

python ncs_face.py 
[Figure: Screenshot of the Movidius telling a recognized face from an unknown one]

Step 5: Setting up the Walabot for person detection

One of the biggest problems right now is that the AI can recognize your face in two dimensions, but it knows nothing about the third. The Walabot plays a crucial role here, making sure someone cannot just hold up your photo and unlock the deadbolt.

Install the Walabot API so that it can be imported into Python projects. The Walabot API installation section of the website, https://api.walabot.com/_pythonapi.html#_installingwalabotapi, contains an error: it states

python -m pip “/usr/share/walabot/python/WalabotAPI-1.0.21.tar.gz”

when it should be

python3 -m pip install "/usr/share/walabot/python/WalabotAPI-1.0.21.zip"

Connect the Walabot Maker over USB 2; I could not get USB 3 working, but USB 2 connects to Linux just fine. Since the Joule only has one USB 3 port, attach an extra USB 2 port here to host the Walabot Maker.

[Figure: Everything connected]

Test the Walabot by running the following command inside a project folder, for example https://github.com/Walabot-Projects/Walabot-SensorTargets:

python SensorTargets.py

This should give you a good test of whether the Walabot runs correctly, and of how to measure the distance to whatever you want.

[Figure: Walabot test]

The DistanceMeasure example was not very consistent in its measurements, while zPosCm seemed very accurate, so I decided to use zPosCm for the demo. A fake face will not come through at the same zPosCm, and on top of that we can also detect breathing to make sure a real person is there; for this example, though, we will just use zPosCm. Here we modify ncs_thread_model.py so that we can use the Walabot radar information alongside the ncs_thread AI information.
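
A minimal sketch of the distance check is below, assuming the Walabot API is installed as in the step above. The arena values mirror the SensorTargets example, and THRESHOLD_CM is a placeholder you would tune for your own door.

from imp import load_source

wlbt = load_source('WalabotAPI',
                   '/usr/share/walabot/python/WalabotAPI.py')
wlbt.Init()
wlbt.SetSettingsFolder()
wlbt.ConnectAny()
wlbt.SetProfile(wlbt.PROF_SENSOR)
wlbt.SetArenaR(10, 150, 2)           # radial range in cm
wlbt.SetArenaTheta(-20, 20, 10)
wlbt.SetArenaPhi(-45, 45, 2)
wlbt.SetThreshold(15)
wlbt.Start()

THRESHOLD_CM = 60                    # placeholder: tune on site
wlbt.Trigger()
targets = wlbt.GetSensorTargets()
in_range = any(t.zPosCm < THRESHOLD_CM for t in targets)
print('person within %d cm: %s' % (THRESHOLD_CM, in_range))
wlbt.Stop()
wlbt.Disconnect()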

[Figure: Now we can detect distance as well as detecting faces with the AI]

Step 5B (optional): Adding breath detection with the Walabot radar

Optionally, we can use the Walabot to detect whether the person is breathing, which adds extra security for the user.

When a person breathes, the Walabot radar picks up an energy peak fluctuating up and down, as in the figure below, especially within close range of the radar.

[Figure: Normal breathing pattern]

When an object is faking it in front of the radar, that shows up in the energy level. The figure below was produced by putting a computer screen directly in front of the radar.

[Figure: When only a picture is placed in front of the radar]

The code is attached as "Walabot Breath Detection" and shown below. First, we can detect whether a person is breathing by checking that the data fluctuates up and down rather than staying flat.

#!/usr/bin/env python3 
from __future__ import print_function # WalabotAPI works on both Python 2 and 3. 
from sys import platform 
from os import system 
from imp import load_source 
from os.path import join 
import time, random 
import math 
from collections import deque 
import urllib.request 
modulePath = join('/usr', 'share', 'walabot', 'python', 'WalabotAPI.py')      
wlbt = load_source('WalabotAPI', modulePath) 
wlbt.Init() 
start = time.time() 
class RealtimePlot: 
  def __init__(self, axes, max_entries =100): 
      self.axis_x = deque(maxlen=max_entries) 
      self.axis_y = deque(maxlen=max_entries) 
      self.axes = axes 
      self.max_entries = max_entries 
      self.lineplot, = axes.plot([], [], "ro-") 
      self.axes.set_autoscaley_on(True) 
  def add(self, x, y): 
      self.axis_x.append(x) 
      self.axis_y.append(y) 
      self.lineplot.set_data(self.axis_x, self.axis_y) 
      self.axes.set_xlim(self.axis_x[0], self.axis_x[-1] + 1e-15) 
      self.axes.set_ylim(0, 0.2) 
      self.axes.relim(); self.axes.autoscale_view() # rescale the y-axis 
  def animate(self, figure, callback, interval = 50): 
      import matplotlib.animation as animation 
      def wrapper(frame_index): 
          self.add(*callback(frame_index)) 
          self.axes.relim(); self.axes.autoscale_view() # rescale the y-axis 
          return self.lineplot 
      animation.FuncAnimation(figure, wrapper, interval=interval) 
def main(): 
  from matplotlib import pyplot as plt 
  # Walabot_SetArenaR - input parameters 
  minInCm, maxInCm, resInCm = 30, 150, 1 
  # Walabot_SetArenaTheta - input parameters 
  minIndegrees, maxIndegrees, resIndegrees = -4, 4, 2 
  # Walabot_SetArenaPhi - input parameters 
  minPhiInDegrees, maxPhiInDegrees, resPhiInDegrees = -4, 4, 2 
  # Configure Walabot database install location (for windows) 
  wlbt.SetSettingsFolder() 
  # 1) Connect : Establish communication with walabot. 
  wlbt.ConnectAny() 
  # 2) Configure: Set scan profile and arena 
  # Set Profile - to Sensor-Narrow. 
  wlbt.SetProfile(wlbt.PROF_SENSOR_NARROW) 
  # Setup arena - specify it by Cartesian coordinates. 
  wlbt.SetArenaR(minInCm, maxInCm, resInCm) 
  # Sets polar range and resolution of arena (parameters in degrees). 
  wlbt.SetArenaTheta(minIndegrees, maxIndegrees, resIndegrees) 
  # Sets azimuth range and resolution of arena.(parameters in degrees). 
  wlbt.SetArenaPhi(minPhiInDegrees, maxPhiInDegrees, resPhiInDegrees) 
  # Dynamic-imaging filter for the specific frequencies typical of breathing 
  wlbt.SetDynamicImageFilter(wlbt.FILTER_TYPE_DERIVATIVE) 
  # 3) Start: Start the system in preparation for scanning. 
  wlbt.Start() 
  fig, axes = plt.subplots() 
  display = RealtimePlot(axes) 
  display.animate(fig, lambda frame_index: (time.time() - start, random.random() * 100)) 
  #plt.show() 
  #fig, axes = plt.subplots() 
  #display = RealtimePlot(axes) 
  while True: 
      appStatus, calibrationProcess = wlbt.GetStatus() 
      # 5) Trigger: Scan(sense) according to profile and record signals 
      # to be available for processing and retrieval. 
      wlbt.Trigger() 
      # 6) Get action: retrieve the last completed triggered recording 
      energy = wlbt.GetImageEnergy() 
      display.add(time.time() - start, energy * 100) 
      #This is just for prototype purposes, we will gather the data in bulk and send them to the server in the future 
      plt.pause(0.001) 
if __name__ == "__main__": main()  

This is the difference between machine learning and deep learning: with machine learning we can write an algorithm here that decides what is breathing and what is not, and improve that algorithm over time. We could also use a deep-learning neural network, following steps 1 to 3, and let the AI judge which one is breathing and which is not on the Neural Compute Stick.
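
As a sketch of the hand-written (machine-learning) route, the check below decides "breathing" from the swing of GetImageEnergy() over a sliding window of samples. WINDOW and the swing threshold are placeholder values to tune empirically; they are not from the project code.

from collections import deque

WINDOW = 100                 # roughly a few breaths' worth of samples
energies = deque(maxlen=WINDOW)

def is_breathing(energies):
    if len(energies) < WINDOW:
        return False         # not enough data yet
    swing = max(energies) - min(energies)
    mean = sum(energies) / len(energies)
    # A breathing chest makes the energy oscillate around its mean;
    # a photo or a screen stays nearly flat.
    return mean > 0 and swing > 0.5 * mean

# Inside the scan loop from the code above:
#     wlbt.Trigger()
#     energies.append(wlbt.GetImageEnergy())
#     breathing = is_breathing(energies)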

Step 5C (optional): Adding energy detection for the Walabot

As mentioned before, we can run a deep-learning algorithm on the raw image to detect the energy, again following steps 1 to 3.

[Figure: Deep learning on the Walabot raw image]

You can use the code below to grab the raw image, then use the NCS to classify the image itself. This requires a second Movidius NCS, since the first NCS is already running the face classification.

The specific piece of code needed in this case is the following, which streams the jpg out to raw.jpg:

   def update(self, rawImage, lenOfPhi, lenOfR):
       """ Updates the canvas cells colors according to a given rawImage
           matrix and its dimensions.
           Arguments:
               rawImage    A 2D matrix contains the current rawImage slice.
               lenOfPhi    Number of cells in Phi axis.
               lenOfR      Number of cells in R axis.
       """
       for i in range(lenOfPhi):
           for j in range(lenOfR):
               self.canvas.itemconfigure(
                   self.cells[lenOfPhi-i-1][j],
                   fill='#'+COLORS[rawImage[i][j]])
       # Export the canvas as PostScript, then save it to raw.jpg via PIL
       ps = self.canvas.postscript(colormode='color')
       im = Image.open(io.BytesIO(ps.encode('utf-8')))
       im.save('raw.jpg')
[Figure: What the saved raw image looks like]

The complete RawImage code follows:

from __future__ import print_function, division
import WalabotAPI as wlbt
import io
from PIL import Image
try:  # for Python 2
   import Tkinter as tk
except ImportError:  # for Python 3
   import tkinter as tk
try:  # for Python 2
   range = xrange
except NameError:
   pass
COLORS = [
   "000083", "000087", "00008B", "00008F", "000093", "000097", "00009B",
   "00009F", "0000A3", "0000A7", "0000AB", "0000AF", "0000B3", "0000B7",
   "0000BB", "0000BF", "0000C3", "0000C7", "0000CB", "0000CF", "0000D3",
   "0000D7", "0000DB", "0000DF", "0000E3", "0000E7", "0000EB", "0000EF",
   "0000F3", "0000F7", "0000FB", "0000FF", "0003FF", "0007FF", "000BFF",
   "000FFF", "0013FF", "0017FF", "001BFF", "001FFF", "0023FF", "0027FF",
   "002BFF", "002FFF", "0033FF", "0037FF", "003BFF", "003FFF", "0043FF",
   "0047FF", "004BFF", "004FFF", "0053FF", "0057FF", "005BFF", "005FFF",
   "0063FF", "0067FF", "006BFF", "006FFF", "0073FF", "0077FF", "007BFF",
   "007FFF", "0083FF", "0087FF", "008BFF", "008FFF", "0093FF", "0097FF",
   "009BFF", "009FFF", "00A3FF", "00A7FF", "00ABFF", "00AFFF", "00B3FF",
   "00B7FF", "00BBFF", "00BFFF", "00C3FF", "00C7FF", "00CBFF", "00CFFF",
   "00D3FF", "00D7FF", "00DBFF", "00DFFF", "00E3FF", "00E7FF", "00EBFF",
   "00EFFF", "00F3FF", "00F7FF", "00FBFF", "00FFFF", "03FFFB", "07FFF7",
   "0BFFF3", "0FFFEF", "13FFEB", "17FFE7", "1BFFE3", "1FFFDF", "23FFDB",
   "27FFD7", "2BFFD3", "2FFFCF", "33FFCB", "37FFC7", "3BFFC3", "3FFFBF",
   "43FFBB", "47FFB7", "4BFFB3", "4FFFAF", "53FFAB", "57FFA7", "5BFFA3",
   "5FFF9F", "63FF9B", "67FF97", "6BFF93", "6FFF8F", "73FF8B", "77FF87",
   "7BFF83", "7FFF7F", "83FF7B", "87FF77", "8BFF73", "8FFF6F", "93FF6B",
   "97FF67", "9BFF63", "9FFF5F", "A3FF5B", "A7FF57", "ABFF53", "AFFF4F",
   "B3FF4B", "B7FF47", "BBFF43", "BFFF3F", "C3FF3B", "C7FF37", "CBFF33",
   "CFFF2F", "D3FF2B", "D7FF27", "DBFF23", "DFFF1F", "E3FF1B", "E7FF17",
   "EBFF13", "EFFF0F", "F3FF0B", "F7FF07", "FBFF03", "FFFF00", "FFFB00",
   "FFF700", "FFF300", "FFEF00", "FFEB00", "FFE700", "FFE300", "FFDF00",
   "FFDB00", "FFD700", "FFD300", "FFCF00", "FFCB00", "FFC700", "FFC300",
   "FFBF00", "FFBB00", "FFB700", "FFB300", "FFAF00", "FFAB00", "FFA700",
   "FFA300", "FF9F00", "FF9B00", "FF9700", "FF9300", "FF8F00", "FF8B00",
   "FF8700", "FF8300", "FF7F00", "FF7B00", "FF7700", "FF7300", "FF6F00",
   "FF6B00", "FF6700", "FF6300", "FF5F00", "FF5B00", "FF5700", "FF5300",
   "FF4F00", "FF4B00", "FF4700", "FF4300", "FF3F00", "FF3B00", "FF3700",
   "FF3300", "FF2F00", "FF2B00", "FF2700", "FF2300", "FF1F00", "FF1B00",
   "FF1700", "FF1300", "FF0F00", "FF0B00", "FF0700", "FF0300", "FF0000",
   "FB0000", "F70000", "F30000", "EF0000", "EB0000", "E70000", "E30000",
   "DF0000", "DB0000", "D70000", "D30000", "CF0000", "CB0000", "C70000",
   "C30000", "BF0000", "BB0000", "B70000", "B30000", "AF0000", "AB0000",
   "A70000", "A30000", "9F0000", "9B0000", "970000", "930000", "8F0000",
   "8B0000", "870000", "830000", "7F0000"]
APP_X, APP_Y = 50, 50  # location of top-left corner of window
CANVAS_LENGTH = 650  # in pixels
class RawImageApp(tk.Frame):
   """ Main app class.
   """
   def __init__(self, master):
       """ Init the GUI components and the Walabot API.
       """
       tk.Frame.__init__(self, master)
       self.canvasPanel = CanvasPanel(self)
       self.wlbtPanel = WalabotPanel(self)
       self.ctrlPanel = ControlPanel(self)
       self.canvasPanel.pack(side=tk.RIGHT, anchor=tk.NE)
       self.wlbtPanel.pack(side=tk.TOP, anchor=tk.W, fill=tk.BOTH, pady=10)
       self.ctrlPanel.pack(side=tk.TOP, anchor=tk.W, fill=tk.BOTH, pady=10)
       self.wlbt = Walabot()
   def initAppLoop(self):
       if self.wlbt.isConnected():
           self.ctrlPanel.statusVar.set('STATUS_CONNECTED')
           self.update_idletasks()
           params = self.wlbtPanel.getParams()
           self.wlbt.setParams(*params)
           self.wlbtPanel.setParams(*self.wlbt.getArenaParams())
           if not params[4]:  # equals: if not mtiMode
               self.ctrlPanel.statusVar.set('STATUS_CALIBRATING')
               self.update_idletasks()
               self.wlbt.calibrate()
           self.lenOfPhi, self.lenOfR = self.wlbt.getRawImageSliceDimensions()
           self.canvasPanel.setGrid(self.lenOfPhi, self.lenOfR)
           self.wlbtPanel.changeEntriesState('disabled')
           self.loop()
       else:
           self.ctrlPanel.statusVar.set('STATUS_DISCONNECTED')
   def loop(self):
       self.ctrlPanel.statusVar.set('STATUS_SCANNING')
       rawImage = self.wlbt.triggerAndGetRawImageSlice()
       self.canvasPanel.update(rawImage, self.lenOfPhi, self.lenOfR)
       self.ctrlPanel.fpsVar.set(self.wlbt.getFps())
       self.cyclesId = self.after_idle(self.loop)
class WalabotPanel(tk.LabelFrame):
   class WalabotParameter(tk.Frame):
       """ The frame that sets each Walabot parameter line.
       """
       def __init__(self, master, varVal, minVal, maxVal, defaultVal):
           """ Init the Labels (parameter name, min/max value) and entry.
           """
           tk.Frame.__init__(self, master)
           tk.Label(self, text=varVal).pack(side=tk.LEFT, padx=(0, 5), pady=1)
           self.minVal, self.maxVal = minVal, maxVal
           self.var = tk.StringVar()
           self.var.set(defaultVal)
           self.entry = tk.Entry(self, width=7, textvariable=self.var)
           self.entry.pack(side=tk.LEFT)
           self.var.trace("w", lambda a, b, c, var=self.var: self.validate())
           txt = "[{}, {}]".format(minVal, maxVal)
           tk.Label(self, text=txt).pack(side=tk.LEFT, padx=(5, 20), pady=1)
       def validate(self):
           """ Checks that the entered value is a valid number and between
               the min/max values. Change the font color of the value to red
               if False, else to black (normal).
           """
           num = self.var.get()
           try:
               num = float(num)
               if num < self.minVal or num > self.maxVal:
                   self.entry.config(fg='#'+COLORS[235])
                   return
               self.entry.config(fg='gray1')
           except ValueError:
               self.entry.config(fg='#'+COLORS[235])
               return
       def get(self):
           """ Returns the entry value as a float.
           """
           return float(self.var.get())
       def set(self, value):
           """ Sets the entry value according to a given one.
           """
           self.var.set(value)
       def changeState(self, state):
           """ Change the entry state according to a given one.
           """
           self.entry.configure(state=state)
   class WalabotParameterMTI(tk.Frame):
       """ The frame that control the Walabot MTI parameter line.
       """
       def __init__(self, master):
           """ Init the MTI line (label, radiobuttons).
           """
           tk.Frame.__init__(self, master)
           tk.Label(self, text="MTI      ").pack(side=tk.LEFT)
           self.mtiVar = tk.IntVar()
           self.mtiVar.set(0)
           self.true = tk.Radiobutton(
               self, text="True", variable=self.mtiVar, value=2)
           self.false = tk.Radiobutton(
               self, text="False", variable=self.mtiVar, value=0)
           self.true.pack(side=tk.LEFT)
           self.false.pack(side=tk.LEFT)
       def get(self):
           """ Returns the value of the pressed radiobutton.
           """
           return self.mtiVar.get()
       def set(self, value):
           """ Sets the pressed radiobutton according to a given value.
           """
           self.mtiVar.set(value)
       def changeState(self, state):
           """ Change the state of the radiobuttons according to a given one.
           """
           self.true.configure(state=state)
           self.false.configure(state=state)
   def __init__(self, master):
       tk.LabelFrame.__init__(self, master, text='Walabot Configuration')
       self.rMin = self.WalabotParameter(self, 'R     Min', 1, 1000, 10.0)
       self.rMax = self.WalabotParameter(self, 'R     Max', 1, 1000, 100.0)
       self.rRes = self.WalabotParameter(self, 'R     Res', 0.1, 10, 2.0)
       self.tMin = self.WalabotParameter(self, 'Theta Min', -90, 90, -20.0)
       self.tMax = self.WalabotParameter(self, 'Theta Max', -90, 90, 20.0)
       self.tRes = self.WalabotParameter(self, 'Theta Res', 0.1, 10, 10.0)
       self.pMin = self.WalabotParameter(self, 'Phi   Min', -90, 90, -45.0)
       self.pMax = self.WalabotParameter(self, 'Phi   Max', -90, 90, 45.0)
       self.pRes = self.WalabotParameter(self, 'Phi   Res', 0.1, 10, 2.0)
       self.thld = self.WalabotParameter(self, 'Threshold', 0.1, 100, 15.0)
       self.mti = self.WalabotParameterMTI(self)
       self.parameters = (
           self.rMin, self.rMax, self.rRes, self.tMin, self.tMax, self.tRes,
           self.pMin, self.pMax, self.pRes, self.thld, self.mti)
       for param in self.parameters:
           param.pack(anchor=tk.W)
   def getParams(self):
       rParams = (self.rMin.get(), self.rMax.get(), self.rRes.get())
       tParams = (self.tMin.get(), self.tMax.get(), self.tRes.get())
       pParams = (self.pMin.get(), self.pMax.get(), self.pRes.get())
       thldParam, mtiParam = self.thld.get(), self.mti.get()
       return rParams, tParams, pParams, thldParam, mtiParam
   def setParams(self, rParams, thetaParams, phiParams, threshold):
       self.rMin.set(rParams[0])
       self.rMax.set(rParams[1])
       self.rRes.set(rParams[2])
       self.tMin.set(thetaParams[0])
       self.tMax.set(thetaParams[1])
       self.tRes.set(thetaParams[2])
       self.pMin.set(phiParams[0])
       self.pMax.set(phiParams[1])
       self.pRes.set(phiParams[2])
       self.thld.set(threshold)
   def changeEntriesState(self, state):
       for param in self.parameters:
           param.changeState(state)
class ControlPanel(tk.LabelFrame):
   """ This class is designed to control the control area of the app.
   """
   def __init__(self, master):
       """ Initialize the buttons and the data labels.
       """
       tk.LabelFrame.__init__(self, master, text='Control Panel')
       self.buttonsFrame = tk.Frame(self)
       self.runButton, self.stopButton = self.setButtons(self.buttonsFrame)
       self.statusFrame = tk.Frame(self)
       self.statusVar = self.setVar(self.statusFrame, 'APP_STATUS', '')
       self.errorFrame = tk.Frame(self)
       self.errorVar = self.setVar(self.errorFrame, 'EXCEPTION', '')
       self.fpsFrame = tk.Frame(self)
       self.fpsVar = self.setVar(self.fpsFrame, 'FRAME_RATE', 'N/A')
       self.buttonsFrame.grid(row=0, column=0, sticky=tk.W)
       self.statusFrame.grid(row=1, columnspan=2, sticky=tk.W)
       self.errorFrame.grid(row=2, columnspan=2, sticky=tk.W)
       self.fpsFrame.grid(row=3, columnspan=2, sticky=tk.W)
   def setButtons(self, frame):
       """ Initialize the 'Start' and 'Stop' buttons.
       """
       runButton = tk.Button(frame, text='Start', command=self.start)
       stopButton = tk.Button(frame, text='Stop', command=self.stop)
       runButton.grid(row=0, column=0)
       stopButton.grid(row=0, column=1)
       return runButton, stopButton
   def setVar(self, frame, varText, default):
       """ Initialize the data frames.
       """
       strVar = tk.StringVar()
       strVar.set(default)
       tk.Label(frame, text=(varText).ljust(12)).grid(row=0, column=0)
       tk.Label(frame, textvariable=strVar).grid(row=0, column=1)
       return strVar
   def start(self):
       """ Applied when 'Start' button is pressed. Starts the Walabot and
           the app cycles.
       """
       self.master.initAppLoop()
   def stop(self):
       """ Applied when 'Stop' button in pressed. Stops the Walabot and the
           app cycles.
       """
       if hasattr(self.master, 'cyclesId'):
           self.master.after_cancel(self.master.cyclesId)
           self.master.wlbtPanel.changeEntriesState('normal')
           self.master.canvasPanel.reset()
           self.statusVar.set('STATUS_IDLE')
class CanvasPanel(tk.LabelFrame):
   """ This class is designed to control the canvas area of the app.
   """
   def __init__(self, master):
       """ Initialize the label-frame and canvas.
       """
       tk.LabelFrame.__init__(self, master, text='Raw Image Slice: R / Phi')
       self.canvas = tk.Canvas(
           self, width=CANVAS_LENGTH, height=CANVAS_LENGTH)
       self.canvas.pack()
       self.canvas.configure(background='#'+COLORS[0])
   def setGrid(self, sizeX, sizeY):
       """ Set the canvas components (rectangles), given the size of the axes.
           Arguments:
               sizeX       Number of cells in Phi axis.
               sizeY       Number of cells in R axis.
       """
       recHeight, recWidth = CANVAS_LENGTH/sizeX, CANVAS_LENGTH/sizeY
       self.cells = [[
           self.canvas.create_rectangle(
               recWidth*col, recHeight*row,
               recWidth*(col+1), recHeight*(row+1),
               width=0)
           for col in range(sizeY)] for row in range(sizeX)]
   def update(self, rawImage, lenOfPhi, lenOfR):
       """ Updates the canvas cells colors acorrding to a given rawImage
           matrix and it's dimensions.
           Arguments:
               rawImage    A 2D matrix contains the current rawImage slice.
               lenOfPhi    Number of cells in Phi axis.
               lenOfR      Number of cells in R axis.
       """
       for i in range(lenOfPhi):
           for j in range(lenOfR):
               self.canvas.itemconfigure(
                   self.cells[lenOfPhi-i-1][j],
                   fill='#'+COLORS[rawImage[i][j]])
        # Export the canvas as PostScript, then save it to raw.jpg via PIL
        ps = self.canvas.postscript(colormode='color')
        im = Image.open(io.BytesIO(ps.encode('utf-8')))
       im.save('raw.jpg')
   def reset(self):
       """ Deletes all the canvas components (colored rectangles).
       """
       self.canvas.delete('all')
class Walabot:
   """ Control the Walabot using the Walabot API.
   """
   def __init__(self):
       """ Init the Walabot API.
       """
       self.wlbt = wlbt
       self.wlbt.Init()
       self.wlbt.SetSettingsFolder()
   def isConnected(self):
       """ Try to connect the Walabot device. Return True/False accordingly.
       """
       try:
           self.wlbt.ConnectAny()
       except self.wlbt.WalabotError as err:
           if err.code == 19:  # "WALABOT_INSTRUMENT_NOT_FOUND"
               return False
           else:
               raise err
       return True
   def setParams(self, r, theta, phi, threshold, mti):
       """ Set the arena Parameters according given ones.
       """
       self.wlbt.SetProfile(self.wlbt.PROF_SENSOR)
       self.wlbt.SetArenaR(*r)
       self.wlbt.SetArenaTheta(*theta)
       self.wlbt.SetArenaPhi(*phi)
       self.wlbt.SetThreshold(threshold)
       self.wlbt.SetDynamicImageFilter(mti)
       self.wlbt.Start()
   def getArenaParams(self):
       """ Returns the Walabot parameters from the Walabot SDK.
           Returns:
               params      rParams, thetaParams, phiParams, threshold as
                           given from the Walabot SDK.
       """
       rParams = self.wlbt.GetArenaR()
       thetaParams = self.wlbt.GetArenaTheta()
       phiParams = self.wlbt.GetArenaPhi()
       threshold = self.wlbt.GetThreshold()
       return rParams, thetaParams, phiParams, threshold
   def calibrate(self):
       """ Calibrates the Walabot.
       """
       self.wlbt.StartCalibration()
       while self.wlbt.GetStatus()[0] == self.wlbt.STATUS_CALIBRATING:
           self.wlbt.Trigger()
   def getRawImageSliceDimensions(self):
       """ Returns the dimensions of the rawImage 2D list given from the
           Walabot SDK.
           Returns:
               lenOfPhi    Num of cells in Phi axis.
                lenOfR      Num of cells in R axis.
       """
       return self.wlbt.GetRawImageSlice()[1:3]
   def triggerAndGetRawImageSlice(self):
       """ Returns the rawImage given from the Walabot SDK.
           Returns:
               rawImage    A rawImage list as described in the Walabot docs.
       """
       self.wlbt.Trigger()
       return self.wlbt.GetRawImageSlice()[0]
   def getFps(self):
       """ Returns the Walabot current fps as given from the Walabot SDK.
           Returns:
               fpsVar      Number of frames per seconds.
       """
       return int(self.wlbt.GetAdvancedParameter('FrameRate'))
def rawImage():
   """ Main app function. Init the main app class, configure the window
       and start the mainloop.
   """
   root = tk.Tk()
   root.title('Walabot - Raw Image Slice Example')
   RawImageApp(root).pack(side=tk.TOP, fill=tk.BOTH, expand=True)
   root.geometry("+{}+{}".format(APP_X, APP_Y))  # set window location
   root.update()
   root.minsize(width=root.winfo_reqwidth(), height=root.winfo_reqheight())
   root.mainloop()
if __name__ == '__main__':
   rawImage()

Once the Caffe model is built, you can use the code above to grab the raw image and save it to raw.jpg. After that, run the following code to classify the image on the NCS:

import os
import sys
import numpy
import ntpath
import argparse
import skimage.io
import skimage.transform
import mvnc.mvncapi as mvnc
# Number of top predictions to print
NUM_PREDICTIONS		= 5
# Variable to store commandline arguments
ARGS                = None
# ---- Step 1: Open the enumerated device and get a handle to it -------------
def open_ncs_device():
   # Look for enumerated NCS device(s); quit program if none found.
   devices = mvnc.EnumerateDevices()
   if len( devices ) == 0:
       print( "No devices found" )
       quit()
   # Get a handle to the first enumerated device and open it
   device = mvnc.Device( devices[0] )
   device.OpenDevice()
   return device
# ---- Step 2: Load a graph file onto the NCS device -------------------------
def load_graph( device ):
   # Read the graph file into a buffer
   with open( ARGS.graph, mode='rb' ) as f:
       blob = f.read()
   # Load the graph buffer into the NCS
   graph = device.AllocateGraph( blob )
   return graph
# ---- Step 3: Pre-process the images ----------------------------------------
def pre_process_image():
   # Read & resize image [Image size is defined during training]
   img = skimage.io.imread( ARGS.image )
   img = skimage.transform.resize( img, ARGS.dim, preserve_range=True )
   # Convert RGB to BGR [skimage reads image in RGB, but Caffe uses BGR]
   if( ARGS.colormode == "BGR" ):
       img = img[:, :, ::-1]
   # Mean subtraction & scaling [A common technique used to center the data]
   img = img.astype( numpy.float16 )
   img = ( img - numpy.float16( ARGS.mean ) ) * ARGS.scale
   return img
# ---- Step 4: Read & print inference results from the NCS -------------------
def infer_image( graph, img ):
   # Load the labels file 
   labels =[ line.rstrip('\n') for line in 
                  open( ARGS.labels ) if line != 'classes\n'] 
   # The first inference takes an additional ~20ms due to memory 
   # initializations, so we make a 'dummy forward pass'.
   graph.LoadTensor( img, 'user object' )
   output, userobj = graph.GetResult()
   # Load the image as a half-precision floating point array
   graph.LoadTensor( img, 'user object' )
   # Get the results from NCS
   output, userobj = graph.GetResult()
   # Sort the indices of top predictions
   order = output.argsort()[::-1][:NUM_PREDICTIONS]
   # Get execution time
   inference_time = graph.GetGraphOption( mvnc.GraphOption.TIME_TAKEN )
   # Print the results
   print( "\n==============================================================" )
   print( "Top predictions for", ntpath.basename( ARGS.image ) )
   print( "Execution time: " + str( numpy.sum( inference_time ) ) + "ms" )
   print( "--------------------------------------------------------------" )
   for i in range( 0, NUM_PREDICTIONS ):
       print( "%3.1f%%\t" % (100.0 * output[ order[i] ] )
              + labels[ order[i] ] )
   print( "==============================================================" )
   # If a display is available, show the image on which inference was performed
   if 'DISPLAY' in os.environ:
       skimage.io.imshow( ARGS.image )
       skimage.io.show()
# ---- Step 5: Unload the graph and close the device -------------------------
def close_ncs_device( device, graph ):
   graph.DeallocateGraph()
   device.CloseDevice()
# ---- Main function (entry point for this script ) --------------------------
def main():
   device = open_ncs_device()
   graph = load_graph( device )
   img = pre_process_image()
   infer_image( graph, img )
   close_ncs_device( device, graph )
# ---- Define 'main' function as the entry point for this script -------------
if __name__ == '__main__':
   parser = argparse.ArgumentParser(
                        description="Image classifier using \
                        Intel? Movidius? Neural Compute Stick." )
   parser.add_argument( '-g', '--graph', type=str,
                        default='/WalabotRawNet/graph',
                        help="Absolute path to the neural network graph file." )
   parser.add_argument( '-i', '--image', type=str,
                        default='raw.jpg',
                        help="Absolute path to the image that needs to be inferred." )
   parser.add_argument( '-l', '--labels', type=str,
                        default='raw_classifies.txt',
                        help="Absolute path to labels file." )
   parser.add_argument( '-M', '--mean', type=float,
                        nargs='+',
                        default=[104.00698793, 116.66876762, 122.67891434],
                        help="',' delimited floating point values for image mean." )
    parser.add_argument( '-S', '--scale', type=float,
                         default=1,
                         help="Scale factor applied to the image after mean subtraction." )
   parser.add_argument( '-D', '--dim', type=int,
                        nargs='+',
                        default=[224, 224],
                        help="Image dimensions. ex. -D 224 224" )
   parser.add_argument( '-c', '--colormode', type=str,
                        default="BGR",
                        help="RGB vs BGR color sequence. TensorFlow = RGB, Caffe = BGR" )
   ARGS = parser.parse_args()
   main()
# ==== End of file =========================================================== 
[Figure: walabot_raw_classification screenshot]

Step 6: Unlocking the deadbolt

The last part of the hardware setup is the deadbolt itself, which we drive through the mraa library. First put the Grove Shield on the Up2 board as shown, then install the mraa library:

sudo add-apt-repository ppa:mraa/mraa
sudo apt-get update
sudo apt-get install libmraa1 libmraa-dev libmraa-java python-mraa python3-mraa node-mraa mraa-tools 

Then we can run the examples from https://github.com/intel-iot-devkit/mraa

Ideally we would drive it directly from the Up2 board, but since the GPIO currently cannot push enough current out, we add one extra step: attach the lock to an Arduino and control it through mraa.

The code on the Arduino side is fairly simple: it just receives 0 to lock and 1 to unlock. This is sent over the USB (UART) channel, which is simple to use.

const int ledPin =  7;      // the number of the LED pin
int incomingByte = 0;   // for incoming serial data
void setup() {
 // initialize the LED pin as an output:
 pinMode(ledPin, OUTPUT);
 Serial.begin(9600);     // opens serial port, sets data rate to 9600 bps
}
void loop() {
         // send data only when you receive data:
       if (Serial.available() > 0) {
               // read the incoming byte:
               incomingByte = Serial.read();
          if(incomingByte == 48)
          {
           digitalWrite(ledPin, LOW);
          }
          else if(incomingByte == 49)
          {
           digitalWrite(ledPin, HIGH);
          }
               // say what you got:
               Serial.print("I received: ");
               Serial.println(incomingByte, DEC);
       }
}

We can test the deadbolt with the following code on the Up2 board:

import mraa
import time
import sys
mraa.addSubplatform(mraa.GROVEPI,"0")
# serial port
port = "/dev/ttyACM0"
data_on = "1"
data_off = "0"
# initialise UART
uart = mraa.Uart(port)
while True:
   uart.write(bytearray(data_on, 'utf-8'))
   print("on")
   time.sleep(3)
   uart.write(bytearray(data_off, 'utf-8'))
   print("off")
   time.sleep(3)

Finally, we can integrate all of these pieces into our main application, as sketched below.
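
A rough sketch of that integration follows. Note the assumptions: check_face(), check_distance(), and check_breath() are hypothetical stand-ins for the results of steps 4, 5, and 5B, and SERVER_URL stands in for your own server from step 7; none of these names come from the project code.

import time
import urllib.request

import mraa

SERVER_URL = '{your own url}'        # placeholder, see step 7
uart = mraa.Uart('/dev/ttyACM0')     # Arduino from step 6

def check_face():      return False  # stub: wire to the NCS result (step 4)
def check_distance():  return False  # stub: wire to the zPosCm check (step 5)
def check_breath():    return False  # stub: wire to breath detection (step 5B)

def server_says_unlock(faceid, distance, breathing):
    url = '%s/input?faceid=%d&distance=%d&breathing=%d' % (
        SERVER_URL, faceid, distance, breathing)
    reply = urllib.request.urlopen(url).read()
    return b'"alexa":1' in reply     # crude check of the returned JSON flag

while True:
    if check_face() and check_distance() and check_breath() \
            and server_says_unlock(1, 1, 1):
        uart.write(bytearray('1', 'utf-8'))   # unlock
        time.sleep(10)
        uart.write(bytearray('0', 'utf-8'))   # relock
    time.sleep(1)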

Step 7: Server data storage

To keep track of the face and Walabot sensor data, it is a good idea to store the data in the cloud. In this example we set up simple storage in a flat file, but in the future we could move it into MongoDB.

The current example is a very simple proof of concept: we only track face recognition, Walabot distance, and Walabot breathing, all as booleans. When Alexa learns that all of them are true, it sets the alexa unlock flag. When updating the server data we read back that alexa flag to decide whether to unlock the deadbolt.

For this example we will use node.js hosted through Heroku; if you want to test your own Alexa skill, you can host your own copy.

Once the server is set up, use the attached code below as your base. You can choose to host elsewhere, such as Amazon, Azure, or IBM Bluemix; this is just a quick example to get a server up and running.

We save the file per UserId so the data can be kept separate, and in the future we can put a database behind it.
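
Note that the server below reads data.txt and alexa.txt on every request, so both files need to exist with valid JSON before the first call. Seeding them as follows should work (an assumption based on the fields the code parses, not an instruction from the original write-up):

// data.txt
{"faceid": 0, "distance": 0, "breathing": 0}

// alexa.txt
{"alexa": 0}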

const express = require('express')
const path = require('path')
const PORT = process.env.PORT || 5000
var fs = require('fs');
var PubNub = require('pubnub')
var app = express()
var http = require("http");
setInterval(function() {
   http.get("{your own url}/test");
}, 300000);
// respond with "hello world" when a GET request is made to the homepage
app.get('/', function (req, res) {
	fs.readFile('data.txt', 'utf8', function readFileCallback(err, data){
	    if (err){
	        console.log(err);
	    } else {
	    obj = JSON.parse(data); //now it's an object
	    res.send(JSON.stringify(obj));
	}});	
})
app.get('/test', function (req, res) {
	/*
	fs.readFile('data.txt', 'utf8', function readFileCallback(err, data){
	    if (err){
	        console.log(err);
	    } else {
	    obj = JSON.parse(data); //now it's an object
	    res.send(JSON.stringify(obj));
	}});*/
	res.send("200");
})
app.get('/input', function (req, res) 
{	var fs = require('fs');
	var faceid = req.query.faceid;
	var distance = req.query.distance;
	var breathing = req.query.breathing;
	fs.readFile('data.txt', 'utf8', function readFileCallback(err, data){
	    if (err){
	        console.log(err);
	    } else {
	    obj = JSON.parse(data); //now it's an object
	    obj.faceid = parseInt(faceid);
	    obj.distance = parseInt(distance); //add some data
	    obj.breathing = parseInt(breathing); //add some data
	    json = JSON.stringify(obj); //convert it back to json
	    fs.writeFile('data.txt', json, 'utf8', null); // write it back
	    fs.readFile('alexa.txt', 'utf8', function readFileCallback(err, data){
		    if (err){
		        console.log(err);
		    } else {
		    obj = JSON.parse(data); //now it's an object
		    json = JSON.stringify(obj); //convert it back to json
		    res.send(json) 
		}});
	}});
})
app.get('/alexa', function (req, res) 
{	var fs = require('fs');
	var alexa = 1;
	fs.readFile('alexa.txt', 'utf8', function readFileCallback(err, data){
	    if (err){
	        console.log(err);
	    } else {
	    obj = JSON.parse(data); //now it's an object
	    obj.alexa = 1;
	    json = JSON.stringify(obj); //convert it back to json
	    fs.writeFile('alexa.txt', json, 'utf8', null); // write it back
		setTimeout(function() {
			//Reset back to lock mode after 10 seconds, enough for client side to unlock
			var obj = new Object()
		    obj.alexa = 0;
			json = JSON.stringify(obj); //convert it back to json
	    	fs.writeFile('alexa.txt', json, 'utf8', null); // write it back
		}, 10000);
	    res.send('success') 
	}});
})
app.listen(PORT, () => console.log(`Listening on ${ PORT }`))

Have the Walabot update the server once the threshold is reached, for example:

if zPosCm <= THRESHOLD_CM:   # THRESHOLD_CM is a placeholder distance threshold
    distance = 1
else:
    distance = 0

Step 8: Setting up Alexa

Users can now unlock the deadbolt through Alexa. We will use the Alexa quick-start skill kit, following this guide: https://developer.amazon.com/alexa-skills-kit/alexa-skill-quick-start-tutorial

The guide will teach you how to:

  • Create a Lambda function on AWS
  • Create an Alexa skill in the Alexa Skills Kit

Lambda hosts the serverless function that Alexa can interact with. Use node.js, and instead of following the guide's sample, create an empty function; we can copy/paste the Alexa node.js code from below.

[Figure: Lambda function]

After creating the function, we get its ARN number. Write it down so we can use it in the Alexa Skill kit's configuration. We also have to add the Alexa Skills Kit trigger to AI Face Lock, then copy and paste the entire node.js code attached as the LAMBDA code.

The intelligence currently lives in Alexa: it checks whether someone is moving around and whether they move often, such as getting up. That way we take some load off the server.

[Figure: The ARN]

Now we move on to the Alexa Skills Kit:

[Figure: Creating the Alexa skill set]

In the interaction model, put in the following lock intent schema:

Intent Schema: 
{ 
"intents": [ 
  { 
    "intent": "AILockIntent" 
  }, 
  { 
    "intent": "AMAZON.HelpIntent" 
  } 
] 
} 
Sample Utterances: 
AILockIntent Unlock the bolt
AILockIntent Open the bolt

After that, under Configuration, we can put in the ARN we noted earlier:

[Figure: Placing the ARN in the ARN endpoint]

[Figure: Alexa skill]

Step 9: "Alexa, ask Face Lock to unlock the bolt"

Now you can test your Alexa skill by asking: "Alexa, ask Face Lock to unlock the bolt". Or test it with any Amazon Echo.

[Figure: You can see it working in the test environment]

Step 10: You're done

And you're done. The AI can now detect three scenarios: when it is not you, when you are using a photo or pretending to be you, and when it really is you.

Step 11: The Android part

This is an extra step to get Android working: we will build a simple PubNub app that alerts the Android device when someone else activates the app, so the user can stream their webcam. We use OpenTok for simple webcam integration.

Here is the Android code for receiving the notification as well as opening the lock:

import android.app.Notification;
import android.app.NotificationChannel;
import android.app.NotificationManager;
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import android.graphics.Color;
import android.media.RingtoneManager;
import android.opengl.GLSurfaceView;
import android.os.Build;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.TaskStackBuilder;
import android.support.v7.app.AppCompatActivity;
import android.support.annotation.NonNull;
import android.Manifest;
import android.os.Bundle;
import android.util.Log;
import android.widget.FrameLayout;
import android.app.AlertDialog;
import android.content.DialogInterface;
import android.widget.Toast;
import com.opentok.android.Session;
import com.opentok.android.Stream;
import com.opentok.android.Publisher;
import com.opentok.android.PublisherKit;
import com.opentok.android.Subscriber;
import com.opentok.android.BaseVideoRenderer;
import com.opentok.android.OpentokError;
import com.opentok.android.SubscriberKit;
import com.pubnub.api.PNConfiguration;
import com.pubnub.api.PubNub;
import com.pubnub.api.callbacks.PNCallback;
import com.pubnub.api.callbacks.SubscribeCallback;
import com.pubnub.api.enums.PNStatusCategory;
import com.pubnub.api.models.consumer.PNPublishResult;
import com.pubnub.api.models.consumer.PNStatus;
import com.pubnub.api.models.consumer.pubsub.PNMessageResult;
import com.pubnub.api.models.consumer.pubsub.PNPresenceEventResult;
import com.tokbox.android.tutorials.basicvideochat.R;
import java.util.Arrays;
import java.util.List;
import pub.devrel.easypermissions.AfterPermissionGranted;
import pub.devrel.easypermissions.AppSettingsDialog;
import pub.devrel.easypermissions.EasyPermissions;
public class MainActivity extends AppCompatActivity
                            implements EasyPermissions.PermissionCallbacks,
                                        WebServiceCoordinator.Listener,
                                        Session.SessionListener,
                                        PublisherKit.PublisherListener,
                                        SubscriberKit.SubscriberListener{
    private static final String LOG_TAG = MainActivity.class.getSimpleName();
    private static final int RC_SETTINGS_SCREEN_PERM = 123;
    private static final int RC_VIDEO_APP_PERM = 124;
    // Suppressing this warning. mWebServiceCoordinator will get GarbageCollected if it is local.
    @SuppressWarnings("FieldCanBeLocal")
    private WebServiceCoordinator mWebServiceCoordinator;
    private Session mSession;
    private Publisher mPublisher;
    private Subscriber mSubscriber;
    private FrameLayout mPublisherViewContainer;
    private FrameLayout mSubscriberViewContainer;
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        Log.d(LOG_TAG, "onCreate");
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        // initialize view objects from your layout
        mPublisherViewContainer = (FrameLayout)findViewById(R.id.publisher_container);
        mSubscriberViewContainer = (FrameLayout)findViewById(R.id.subscriber_container);
        requestPermissions();
        PNConfiguration pnConfiguration = new PNConfiguration();
        pnConfiguration.setSubscribeKey("sub-c-777d4466-c823-11e6-b045-02ee2ddab7fe");
        pnConfiguration.setPublishKey("pub-c-99f0375f-cc13-46fb-9b30-d1772c531f3a");
        PubNub pubnub = new PubNub(pnConfiguration);
        pubnub.addListener(new SubscribeCallback() {
            @Override
            public void status(PubNub pubnub, PNStatus status) {
                if (status.getCategory() == PNStatusCategory.PNUnexpectedDisconnectCategory) {
                    // This event happens when radio / connectivity is lost
                }
                else if (status.getCategory() == PNStatusCategory.PNConnectedCategory) {
                    // Connect event. You can do stuff like publish, and know you'll get it.
                    // Or just use the connected event to confirm you are subscribed for
                    // UI / internal notifications, etc
                    /*
                    if (status.getCategory() == PNStatusCategory.PNConnectedCategory){
                        pubnub.publish().channel("awesomeChannel").message("hello!!").async(new PNCallback() {
                            @Override
                            public void onResponse(PNPublishResult result, PNStatus status) {
                                // Check whether request successfully completed or not.
                                if (!status.isError()) {
                                    // Message successfully published to specified channel.
                                }
                                // Request processing failed.
                                else {
                                    // Handle message publish error. Check 'category' property to find out possible issue
                                    // because of which request did fail.
                                    //
                                    // Request can be resent using: [status retry];
                                }
                            }
                        });
                    }*/
                }
                else if (status.getCategory() == PNStatusCategory.PNReconnectedCategory) {
                    // Happens as part of our regular operation. This event happens when
                    // radio / connectivity is lost, then regained.
                }
                else if (status.getCategory() == PNStatusCategory.PNDecryptionErrorCategory) {
                    // Handle messsage decryption error. Probably client configured to
                    // encrypt messages and on live data feed it received plain text.
                }
            }
            @Override
            public void message(PubNub pubnub, PNMessageResult message) {
                // Handle new message stored in message.message
                if (message.getChannel() != null) {
                    // Message has been received on channel group stored in
                    // message.getChannel()
                    Log.e("doh", "Doh");
                    NotificationManager mNotificationManager =
                            (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
                    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
                        NotificationChannel notificationChannel = new NotificationChannel("facelock", "My Notifications", NotificationManager.IMPORTANCE_DEFAULT);
                        // Configure the notification channel.
                        notificationChannel.setDescription("Channel description");
                        notificationChannel.enableLights(true);
                        notificationChannel.setLightColor(Color.RED);
                        notificationChannel.setVibrationPattern(new long[]{0, 1000, 500, 1000});
                        notificationChannel.enableVibration(true);
                        mNotificationManager.createNotificationChannel(notificationChannel);
                    }
                    Notification.Builder mBuilder =
                            new Notification.Builder(MainActivity.this, "facelock")
                                    .setSmallIcon(R.mipmap.ic_launcher_small)
                                    .setContentTitle("Face lock")
                                    .setContentText("Face lock is detecting unusual activity, click to see security cam.");
                    Intent notificationIntent = new Intent(MainActivity.this, MainActivity.class);
                    notificationIntent.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP
                            | Intent.FLAG_ACTIVITY_SINGLE_TOP);
                    PendingIntent intent = PendingIntent.getActivity(MainActivity.this, 0,
                            notificationIntent, 0);
                    mBuilder.setContentIntent(intent);
                    mNotificationManager.notify(001, mBuilder.build());
//                    Intent intent = new Intent(MainActivity.this, MainActivity.class);
//                    MainActivity.this.startActivity(intent);
                }
                else {
                    // Message has been received on channel stored in
                    // message.getSubscription()
                }
            /*
                log the following items with your favorite logger
                    - message.getMessage()
                    - message.getSubscription()
                    - message.getTimetoken()
            */
            }
            @Override
            public void presence(PubNub pubnub, PNPresenceEventResult presence) {
            }
        });
        pubnub.subscribe().channels(Arrays.asList("facelock")).execute();
    }
     /* Activity lifecycle methods */
    @Override
    protected void onPause() {
        Log.d(LOG_TAG, "onPause");
        super.onPause();
        if (mSession != null) {
            mSession.onPause();
        }
    }
    @Override
    protected void onResume() {
        Log.d(LOG_TAG, "onResume");
        super.onResume();
        if (mSession != null) {
            mSession.onResume();
        }
    }
    @Override
    public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        EasyPermissions.onRequestPermissionsResult(requestCode, permissions, grantResults, this);
    }
    @Override
    public void onPermissionsGranted(int requestCode, List perms) {
        Log.d(LOG_TAG, "onPermissionsGranted:" + requestCode + ":" + perms.size());
    }
    @Override
    public void onPermissionsDenied(int requestCode, List perms) {
        Log.d(LOG_TAG, "onPermissionsDenied:" + requestCode + ":" + perms.size());
        if (EasyPermissions.somePermissionPermanentlyDenied(this, perms)) {
            new AppSettingsDialog.Builder(this)
                    .setTitle(getString(R.string.title_settings_dialog))
                    .setRationale(getString(R.string.rationale_ask_again))
                    .setPositiveButton(getString(R.string.setting))
                    .setNegativeButton(getString(R.string.cancel))
                    .setRequestCode(RC_SETTINGS_SCREEN_PERM)
                    .build()
                    .show();
        }
    }
    @AfterPermissionGranted(RC_VIDEO_APP_PERM)
    private void requestPermissions() {
        String[] perms = { Manifest.permission.INTERNET, Manifest.permission.CAMERA, Manifest.permission.RECORD_AUDIO };
        if (EasyPermissions.hasPermissions(this, perms)) {
            // if there is no server URL set
            if (OpenTokConfig.CHAT_SERVER_URL == null) {
                // use hard coded session values
                if (OpenTokConfig.areHardCodedConfigsValid()) {
                    initializeSession(OpenTokConfig.API_KEY, OpenTokConfig.SESSION_ID, OpenTokConfig.TOKEN);
                } else {
                    showConfigError("Configuration Error", OpenTokConfig.hardCodedConfigErrorMessage);
                }
            } else {
                // otherwise initialize WebServiceCoordinator and kick off request for session data
                // session initialization occurs once data is returned, in onSessionConnectionDataReady
                if (OpenTokConfig.isWebServerConfigUrlValid()) {
                    mWebServiceCoordinator = new WebServiceCoordinator(this, this);
                    mWebServiceCoordinator.fetchSessionConnectionData(OpenTokConfig.SESSION_INFO_ENDPOINT);
                } else {
                    showConfigError("Configuration Error", OpenTokConfig.webServerConfigErrorMessage);
                }
            }
        } else {
            EasyPermissions.requestPermissions(this, getString(R.string.rationale_video_app), RC_VIDEO_APP_PERM, perms);
        }
    }
    private void initializeSession(String apiKey, String sessionId, String token) {
        mSession = new Session.Builder(this, apiKey, sessionId).build();
        mSession.setSessionListener(this);
        mSession.connect(token);
    }
    /* Web Service Coordinator delegate methods */
    @Override
    public void onSessionConnectionDataReady(String apiKey, String sessionId, String token) {
        Log.d(LOG_TAG, "ApiKey: "+apiKey + " SessionId: "+ sessionId + " Token: "+token);
        initializeSession(apiKey, sessionId, token);
    }
    @Override
    public void onWebServiceCoordinatorError(Exception error) {
        Log.e(LOG_TAG, "Web Service error: " + error.getMessage());
        Toast.makeText(this, "Web Service error: " + error.getMessage(), Toast.LENGTH_LONG).show();
        finish();
    }
    /* Session Listener methods */
    @Override
    public void onConnected(Session session) {
        Log.d(LOG_TAG, "onConnected: Connected to session: "+session.getSessionId());
        // initialize Publisher and set this object to listen to Publisher events
        mPublisher = new Publisher.Builder(this).build();
        mPublisher.setPublisherListener(this);
        // set publisher video style to fill view
        mPublisher.getRenderer().setStyle(BaseVideoRenderer.STYLE_VIDEO_SCALE,
                BaseVideoRenderer.STYLE_VIDEO_FILL);
        mPublisherViewContainer.addView(mPublisher.getView());
        if (mPublisher.getView() instanceof GLSurfaceView) {
            ((GLSurfaceView) mPublisher.getView()).setZOrderOnTop(true);
        }
        mSession.publish(mPublisher);
    }
    @Override
    public void onDisconnected(Session session) {
        Log.d(LOG_TAG, "onDisconnected: Disconnected from session: "+session.getSessionId());
    }
    @Override
    public void onStreamReceived(Session session, Stream stream) {
        Log.d(LOG_TAG, "onStreamReceived: New Stream Received "+stream.getStreamId() + " in session: "+session.getSessionId());
        if (mSubscriber == null) {
            mSubscriber = new Subscriber.Builder(this, stream).build();
            mSubscriber.getRenderer().setStyle(BaseVideoRenderer.STYLE_VIDEO_SCALE, BaseVideoRenderer.STYLE_VIDEO_FILL);
            mSubscriber.setSubscriberListener(this);
            mSession.subscribe(mSubscriber);
            mSubscriberViewContainer.addView(mSubscriber.getView());
        }
    }
    @Override
    public void onStreamDropped(Session session, Stream stream) {
        Log.d(LOG_TAG, "onStreamDropped: Stream Dropped: "+stream.getStreamId() +" in session: "+session.getSessionId());
        if (mSubscriber != null) {
            mSubscriber = null;
            mSubscriberViewContainer.removeAllViews();
        }
    }
    @Override
    public void onError(Session session, OpentokError opentokError) {
        Log.e(LOG_TAG, "onError: "+ opentokError.getErrorDomain() + " : " +
                opentokError.getErrorCode() + " - "+opentokError.getMessage() + " in session: "+ session.getSessionId());
        showOpenTokError(opentokError);
    }
    /* Publisher Listener methods */
    @Override
    public void onStreamCreated(PublisherKit publisherKit, Stream stream) {
        Log.d(LOG_TAG, "onStreamCreated: Publisher Stream Created. Own stream "+stream.getStreamId());
    }
    @Override
    public void onStreamDestroyed(PublisherKit publisherKit, Stream stream) {
        Log.d(LOG_TAG, "onStreamDestroyed: Publisher Stream Destroyed. Own stream "+stream.getStreamId());
    }
    @Override
    public void onError(PublisherKit publisherKit, OpentokError opentokError) {
        Log.e(LOG_TAG, "onError: "+opentokError.getErrorDomain() + " : " +
                opentokError.getErrorCode() +  " - "+opentokError.getMessage());
        showOpenTokError(opentokError);
    }
    @Override
    public void onConnected(SubscriberKit subscriberKit) {
        Log.d(LOG_TAG, "onConnected: Subscriber connected. Stream: "+subscriberKit.getStream().getStreamId());
    }
    @Override
    public void onDisconnected(SubscriberKit subscriberKit) {
        Log.d(LOG_TAG, "onDisconnected: Subscriber disconnected. Stream: "+subscriberKit.getStream().getStreamId());
    }
    @Override
    public void onError(SubscriberKit subscriberKit, OpentokError opentokError) {
        Log.e(LOG_TAG, "onError: "+opentokError.getErrorDomain() + " : " +
                opentokError.getErrorCode() +  " - "+opentokError.getMessage());
        showOpenTokError(opentokError);
    }
    private void showOpenTokError(OpentokError opentokError) {
        Toast.makeText(this, opentokError.getErrorDomain().name() +": " +opentokError.getMessage() + " Please, see the logcat.", Toast.LENGTH_LONG).show();
        finish();
    }
    private void showConfigError(String alertTitle, final String errorMessage) {
        Log.e(LOG_TAG, "Error " + alertTitle + ": " + errorMessage);
        new AlertDialog.Builder(this)
                .setTitle(alertTitle)
                .setMessage(errorMessage)
                .setPositiveButton("ok", new DialogInterface.OnClickListener() {
                    public void onClick(DialogInterface dialog, int which) {
                        MainActivity.this.finish();
                    }
                })
                .setIcon(android.R.drawable.ic_dialog_alert)
                .show();
    }
}

The server side is simple; we just need to update our Lambda code:

'use strict';
var http = require('https'); 
var PubNub = require('pubnub')
exports.handler = function (event, context) {
   try {
       console.log("event.session.application.applicationId=" + event.session.application.applicationId);
       /**
        * Validate the application ID so that another skill cannot be
        * configured to send requests to this function; replace the ID
        * below with your own skill's application ID.
        */
       if (event.session.application.applicationId !== "amzn1.ask.skill.645f001e-5ea6-49b3-90ef-a0d9c0ef25a1") {
           context.fail("Invalid Application ID");
           return;
       }
       if (event.session.new) {
           onSessionStarted({requestId: event.request.requestId}, event.session);
       }
       if (event.session.user.accessToken == undefined) {
           var cardTitle = "Welcome to AI Face Lock";
           var speechOutput = "Your account is not linked. To start using this skill, please use the companion app to authenticate on Amazon";
           // Respond with the link-account prompt and stop processing the request.
           context.succeed(buildResponse(event.session.attributes, buildSpeechletResponse(cardTitle, speechOutput, "", true)));
           return;
       }
       if (event.request.type === "LaunchRequest") {
           onLaunch(event.request,
               event.session,
               function callback(sessionAttributes, speechletResponse) {
                   context.succeed(buildResponse(sessionAttributes, speechletResponse));
               });
       } else if (event.request.type === "IntentRequest") {
           onIntent(event.request,
               event.session,
               function callback(sessionAttributes, speechletResponse) {
                   context.succeed(buildResponse(sessionAttributes, speechletResponse));
               });
       } else if (event.request.type === "SessionEndedRequest") {
           onSessionEnded(event.request, event.session);
           context.succeed();
       }
   } catch (e) {
       context.fail("Exception: " + e);
   }
};
/**
* Called when the session starts.
*/
function onSessionStarted(sessionStartedRequest, session) {
   console.log("onSessionStarted requestId=" + sessionStartedRequest.requestId
       + ", sessionId=" + session.sessionId);
   // add any session init logic here
}
/**
* Called when the user invokes the skill without specifying what they want.
*/
function onLaunch(launchRequest, session, callback) {
   console.log("onLaunch requestId=" + launchRequest.requestId
       + ", sessionId=" + session.sessionId);
   var cardTitle = "Welcome to AI Face Lock"
   var speechOutput = "Welcome to AI Face Lock"
   callback(session.attributes,
       buildSpeechletResponse(cardTitle, speechOutput, "", false));
}
/**
* Called when the user specifies an intent for this skill.
*/
function onIntent(intentRequest, session, callback) {
   console.log("onIntent requestId=" + intentRequest.requestId
       + ", sessionId=" + session.sessionId);
   var intent = intentRequest.intent,
       intentName = intentRequest.intent.name;
   // dispatch custom intents to handlers here
   if (intentName == 'AILockIntent') {
       handleTrackRequest(intent, session, callback);
   }
   else if(intentName == 'AMAZON.HelpIntent')
   {
        callback(session.attributes, buildSpeechletResponseWithoutCard("Please follow the hackster.io guide to build out the Face Lock and unlock your bolt; afterwards, just ask Face Lock to unlock the deadbolt", "", false));
   }
   else if (intentName =='AMAZON.CancelIntent' || intentName == 'AMAZON.StopIntent')
   {
       callback(session.attributes, buildSpeechletResponseWithoutCard("Exiting AI Face Lock", "", true));
   }
   else {
       throw "Invalid intent";
   }
}
/**
* Called when the user ends the session.
* Is not called when the skill returns shouldEndSession=true.
*/
function onSessionEnded(sessionEndedRequest, session) {
   console.log("onSessionEnded requestId=" + sessionEndedRequest.requestId
       + ", sessionId=" + session.sessionId);
   // Add any cleanup logic here
}
function handleTrackRequest(intent, session, callback) {
   var url = "https://murmuring-bayou-68628.herokuapp.com/"; //you can use your own
   http.get(url, function(res) {
       res.setEncoding('utf8');
       res.on('data', function (chunk) {
           console.log('BODY: ' + chunk);
           var data = JSON.parse(chunk);
           var pubnub = new PubNub({
               publishKey : '{your own key}',
               subscribeKey : '{your own key}'
           });
           // Push notification sent whenever the lock refuses to open.
           var publishConfig = {
               channel : "facelock",
               message : {
                   title: "Face lock",
                   description: "Face lock is detecting unusual activity, click to see security cam."
               }
           };
           if (parseInt(data.faceid) == 0)
           {
               callback(session.attributes, buildSpeechletResponseWithoutCard("Face lock doesn't recognize any user around", "", true));
               pubnub.publish(publishConfig, function(status, response) {
                   console.log(status, response);
               });
           }
           else if (parseInt(data.distance) == 0 || parseInt(data.breathing) == 0) // field names must match the JSON your server returns
           {
               callback(session.attributes, buildSpeechletResponseWithoutCard("Walabot is not detecting anyone's presence", "", true));
               pubnub.publish(publishConfig, function(status, response) {
                   console.log(status, response);
               });
           }
           else
           {
               var urlalexa = "https://murmuring-bayou-68628.herokuapp.com/alexafalse"; //you can use your own
               http.get(urlalexa, function(res1) {
                   res1.setEncoding('utf8');
                   res1.on('data', function (chunk1) {
                       console.log('BODY: ' + chunk1);
                   });
               });
               callback(session.attributes, buildSpeechletResponseWithoutCard("Unlocking deadbolt...", "", true));
           }
       });
   }).on('error', function (e) {
       callback(session.attributes, buildSpeechletResponseWithoutCard("There was a problem connecting to your AI Lock", "", true));
   });
}
// ------- Helper functions to build responses -------
function buildSpeechletResponse(title, output, repromptText, shouldEndSession) {
   return {
       outputSpeech: {
           type: "PlainText",
           text: output
       },
       card: {
           type: "Simple",
           title: title,
           content: output
       },
       reprompt: {
           outputSpeech: {
               type: "PlainText",
               text: repromptText
           }
       },
       shouldEndSession: shouldEndSession
   };
}
function buildSpeechletResponseWithoutCard(output, repromptText, shouldEndSession) {
   return {
       outputSpeech: {
           type: "PlainText",
           text: output
       },
       reprompt: {
           outputSpeech: {
               type: "PlainText",
               text: repromptText
           }
       },
       shouldEndSession: shouldEndSession
   };
}
function buildResponse(sessionAttributes, speechletResponse) {
   return {
       version: "1.0",
       sessionAttributes: sessionAttributes,
       response: speechletResponse
   };
}
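
The Lambda above only unlocks the deadbolt when the face classifier and both Walabot readings pass. For reference, below is a minimal sketch of the status endpoint it polls, assuming Express; this is not the original open-source server, and the field names faceid, distance and breathing are inferred from handleTrackRequest, so verify them against what your server actually returns.

// Hypothetical sketch of the Heroku status endpoint polled by the Lambda.
// Not the original server code: Express is an assumption, and the field
// names are inferred from handleTrackRequest() above.
const express = require('express');
const app = express();

// Latest state pushed by the Up2 board (face recognition + Walabot).
let state = { faceid: 0, distance: 0, breathing: 0 };

// The Lambda parses this JSON and unlocks only when all three are non-zero.
app.get('/', (req, res) => res.json(state));

app.listen(process.env.PORT || 3000);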

On the IoT side, we inject the TokBox code into the web page so the security-camera stream can be viewed in a browser; it should work as shown below.
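
Here is a minimal sketch of that viewer code, assuming the OpenTok v2 browser library (opentok.min.js) is loaded on the page; API_KEY, SESSION_ID and TOKEN are placeholders you generate from your own TokBox project and must match the session the Android app publishes to.

// Hypothetical viewer snippet; load https://static.opentok.com/v2/js/opentok.min.js first.
// API_KEY, SESSION_ID and TOKEN come from your own TokBox project.
var session = OT.initSession(API_KEY, SESSION_ID);

// Subscribe to the security-cam stream published by the phone.
session.on('streamCreated', function (event) {
  session.subscribe(event.stream, 'subscriber', {
    insertMode: 'append',
    width: '100%',
    height: '100%'
  }, function (error) {
    if (error) console.error('Subscribe failed: ' + error.message);
  });
});

session.connect(TOKEN, function (error) {
  if (error) console.error('Connect failed: ' + error.message);
});

The 'subscriber' argument is the id of the div the incoming video is inserted into; using the same API key and session ID ties this page to the publisher stream created in the Android code above.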

