The current state of Edge Computing

I always keep an eye on what's going on in edge computing because I'm a fan of IoT. Honestly, I started my CS major with IoT projects, though they were pretty dumb. (What's listed here is not my first dumb project, hahaha.)

I had planned to do something at SHIFT, and congratulations to Prof. Yang on his recent publication,
Multi-tier computing networks for intelligent IoT. But S3L responded to me first, so I'm a security guy right now.

I've been thinking about this idea for a while; the problems it raises are very close to the key state-of-the-art problems:

1. Computing power: the data-processing equipment is no longer a rack server, so how do we ensure the performance still meets requirements?

2. Power consumption: the power draw cannot grow to a level that ordinary mains power cannot accept, and high power consumption also means a lot of heat.

3. Stability: deployment in the field dramatically increases the difficulty of maintenance, so improved stability also means reduced maintenance cost. This also includes harsh conditions on the user side, such as high temperature, humidity, and corrosive gases.

4. Cost: only if the cost covers the demand can we deploy widely and meet as much customer demand as possible; if the cost cannot compete with network + data center, edge deployment is meaningless.

Moore's law has hit a bottleneck, and it is increasingly difficult to get the best of both general-purpose and workload-specific optimization. At this point, hardware coprocessors that bake common AI algorithms directly into edge-computing silicon become the key to high performance at low power. A key power threshold is 6 W TDP: in general, if a chip's power consumption stays below 6 W, it can be cooled by a heat sink alone, with no fan. Going fanless not only reduces noise, it also means stability and maintainability no longer depend on a fan that can fail. Among front-end edge-computing chips, Horizon, with its self-developed BPU computing architecture, has found a new balance among these requirements: it delivers an equivalent 4 TOPS, matching the compute of a top GPU from two years earlier, at a typical power of only 2 W. That means not only no fan, but the whole device can be sealed in a metal case, avoiding the dust and corrosion that extra ventilation holes would let in.

When it comes to computing power, there is a widespread misunderstanding in the industry: peak compute is often taken as the main index for judging an AI chip. What we actually need is effective compute and the algorithm performance it delivers, which must be measured along four dimensions: peak compute per watt and peak compute per dollar (determined by chip architecture, front-end and back-end design, and process technology), the effective utilization of peak compute (determined by the algorithm and the chip architecture), and the ratio of effective compute to AI performance, chiefly speed and accuracy (determined by the algorithm).

ResNet used to be the industry workhorse, but today a smaller, more carefully designed model such as MobileNet can reach the same accuracy and speed with a tenth of the compute. These ingeniously designed algorithms, however, pose huge challenges to the compute architecture: on traditionally designed architectures they often slash the effective utilization, and from the standpoint of final AI performance the loss can outweigh the gain.

Horizon's biggest strength is predicting the development trends of key algorithms in important application scenarios and baking their computational characteristics into the architecture design ahead of time, so that after one or two years of development the AI processor still fits the latest mainstream algorithms. As a result, compared with other typical AI processors, Horizon's processors have kept a consistently high effective utilization as algorithms evolve, genuinely reaping the benefits of algorithm innovation. Horizon also optimizes the compiler's instruction sequences: after optimization, the effective utilization of peak compute rises by 85%, which makes the chip 2.5 times faster, or cuts power consumption to 40% for the same workload.

Another feature of the Horizon BPU is how well it integrates with sensors in the field. Video demands huge bandwidth: 1080p at 30 fps is about 1.5 Gbit/s from camera to chip. The Horizon BPU can take in the video and perform on-site target detection, tracking, and recognition at the same time, so all the necessary work is completed on site. Both the Journey series for intelligent driving and the Sunrise series for the intelligent Internet of things easily cope with the bandwidth and processing loads of these scenes. More importantly, common AI computations finish within 30 ms, which gradually turns latency-critical applications into reality: autonomous driving, and the recognition of lane lines, pedestrians, vehicles, and obstacles. If latency is too large or unpredictable, accidents follow.
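
To make those dimensions concrete, here is a back-of-the-envelope sketch in C (the 0.85 utilization figure is purely my illustrative assumption; the bandwidth arithmetic just reproduces the paragraph's own numbers):

#include <stdio.h>

int main(void)
{
    /* Effective compute = peak compute x effective utilization. */
    double peak_tops   = 4.0;   /* equivalent peak compute, TOPS */
    double utilization = 0.85;  /* assumed effective utilization */
    printf("effective compute: %.2f TOPS\n", peak_tops * utilization);

    /* Raw camera-to-chip bandwidth for 1080p @ 30 fps, 24 bits per pixel. */
    double bits_per_sec = 1920.0 * 1080.0 * 24.0 * 30.0;
    printf("video bandwidth: %.2f Gbit/s\n", bits_per_sec / 1e9);  /* ~1.49 */
    return 0;
}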

With the Sunrise BPU, however, AI computation completes within a predictable latency, which makes developing autonomous driving much more convenient. Ever since edge computing was proposed, its applications have been constrained by compute performance and by strict sensor and power limits, so adoption has been slow. By striking a new balance between function and performance, the Horizon BPU series can effectively help edge-computing applications deploy to the field more easily, letting all kinds of IoT applications serve everyone more effectively.

credit: https://www.zhihu.com/question/274787680

Tasker, the best IoT automation tool on Android

You have probably heard of Apple's Workflow, which lets users configure custom workflows to carry out a series of actions and greatly cuts down on manual operation. But today's protagonist is not Workflow; it is Tasker on Android, which offers far more freedom in how actions can be combined.

(Figure: the Apple Workflow and Tasker interfaces)

Tasker's interface is very clean. The main screen is split into three tabs: Profiles, Tasks, and Scenes. Today we are after trigger-style automation, so we will focus on Profiles.

Using Tasker is as simple as setting up dominoes: the trigger condition you define is the first tile, and the expected result is the last. The tiles in between are the extra constraints you add; the more conditions, the more specific the scenario.
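
In other words, a profile fires only when the trigger and every constraint in the chain hold, which a quick C sketch (purely illustrative, not a Tasker API) makes plain:

#include <stdbool.h>

/* A profile fires like a domino run: the trigger starts it,
   and every constraint tile in between must also topple. */
bool profile_fires(bool trigger, const bool *constraints, int n)
{
    if (!trigger)
        return false;
    for (int i = 0; i < n; i++)
        if (!constraints[i])
            return false;
    return true;
}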

Expected scenario: plugging in headphones automatically launches NetEase Cloud Music.

Required condition: the phone detects that headphones have been plugged in.

Creating the condition

Step 1: tap the plus sign to add a condition. Our requirement is for the phone to detect headphone insertion, i.e. a change in the phone's state (audio output switching from the speaker to the headphones), so from the top-level categories we pick State.

(Figure: the options for phone-state changes)

Step 2: you can see that Tasker's state categories are quite fine-grained. Headphone output is a hardware change, so simply choosing the hardware state is enough.

Creating the task

Step 3: tap to create a task. The expected result is launching NetEase Cloud Music, so we just pick the Launch option under the App category. Now a change in the phone's state launches NetEase Cloud Music.

With just these few easy steps, you have completed a simple automation flow.

(Figure: plugging in headphones launches NetEase Cloud Music)

This is Tasker's simplest trigger feature, automating a simple everyday scenario. You can add more constraints in the middle, such as time or location restrictions, and customize workflows that improve your efficiency. Take my painful lesson of forgetting to clock in for an entire workday: I wanted something to remind me to clock in when I arrive at and leave the office.

For that, I used a change in the network state as the condition: when my phone reaches the office and auto-connects to the Wi-Fi, Tasker sends me a preset SMS reminding me to clock in. But a disconnect followed by a reconnect could easily send duplicate messages, so one more restriction is needed. I settled on a time restriction: the rule only fires between 8:00 and 9:00. Just like that, an efficient reminder assistant is born.

The overall flow is exactly the same as in the NetEase Cloud Music example; only the trigger and the constraints change. Clearly, with different conditions we can automate whatever scenario we want; the guard logic of the reminder profile is sketched below.
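
Here is that guard logic as a C sketch (the SSID and the function name are hypothetical; Tasker evaluates the equivalent conditions internally):

#include <stdbool.h>
#include <string.h>

/* Fire the check-in SMS only when the office Wi-Fi has just connected
   inside the 08:00-09:00 window; the time check filters out duplicate
   triggers caused by disconnect/reconnect. */
bool should_send_checkin_sms(bool wifi_just_connected,
                             const char *ssid, int hour)
{
    bool office_wifi = wifi_just_connected &&
                       strcmp(ssid, "OFFICE-WIFI") == 0;  /* hypothetical SSID */
    bool in_window = (hour >= 8 && hour < 9);
    return office_wifi && in_window;
}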

It also reacts much faster than IFTTT, so it can even auto-forward SMS verification codes.

[RTthread] RT-Thread startup flow & memory management

RT-Thread startup flow

The msg_ptr pointer in the code points to a 128-byte block of memory that lives in the dynamic heap. Global variables, by contrast, are placed in the RW and ZI sections: the RW section holds globals that have initial values (const globals are read-only and go into the RO section instead), while the ZI section holds globals the system leaves uninitialized, as in the following example:

#include <rtthread.h>

/* const with an initial value: read-only, placed in the RO section */
const static rt_uint32_t sensor_enable = 0x000000FE;
/* uninitialized global: placed in the ZI section, zeroed at startup */
rt_uint32_t sensor_value;
/* global with an initial value: placed in the RW section */
rt_bool_t sensor_inited = RT_FALSE;

void sensor_init() { /* ... */ }
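
The msg_ptr mentioned above does not appear in this snippet; a minimal sketch of that allocation, assuming the standard rt_malloc() heap API, might look like this:

#include <rtthread.h>

static rt_uint8_t *msg_ptr;

void msg_alloc_example(void)
{
    /* 128 bytes drawn from the dynamic memory heap, not from RW/ZI */
    msg_ptr = rt_malloc(128);
    if (msg_ptr != RT_NULL)
    {
        rt_memset(msg_ptr, 0, 128);
    }
}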

The automatic initialization mechanism

int rt_hw_usart_init(void)  /* serial port initialization function */
{
    ... ...
    /* register the UART1 device */
    rt_hw_serial_register(&serial1, "uart1",
                          RT_DEVICE_FLAG_RDWR | RT_DEVICE_FLAG_INT_RX,
                          uart);
    return 0;
}
INIT_BOARD_EXPORT(rt_hw_usart_init);  /* use the component auto-initialization mechanism */

The INIT_BOARD_EXPORT(rt_hw_usart_init) at the end of the sample enables automatic initialization: registered this way, rt_hw_usart_init() is called automatically by the system. So where does the call happen? The system startup flow chart contains two functions, rt_components_board_init() and rt_components_init(), and the shaded boxes that follow them denote the functions that are initialized automatically, where:

1. "board init functions" are all initialization functions declared with INIT_BOARD_EXPORT(fn).

2. "pre-initialization functions" are all initialization functions declared with INIT_PREV_EXPORT(fn).

3. "device init functions" are all initialization functions declared with INIT_DEVICE_EXPORT(fn).

4. "components init functions" are all initialization functions declared with INIT_COMPONENT_EXPORT(fn).

5. "environment init functions" are all initialization functions declared with INIT_ENV_EXPORT(fn).

6. "application init functions" are all initialization functions declared with INIT_APP_EXPORT(fn).
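
Under the hood, each INIT_*_EXPORT macro simply drops a function pointer into a numbered link-time section, and the startup code walks those sections in order. Simplified from RT-Thread's rtdef.h (details vary by toolchain):

typedef int (*init_fn_t)(void);

#define INIT_EXPORT(fn, level) \
    RT_USED const init_fn_t __rt_init_##fn SECTION(".rti_fn." level) = fn

#define INIT_BOARD_EXPORT(fn)      INIT_EXPORT(fn, "1")  /* board init */
#define INIT_PREV_EXPORT(fn)       INIT_EXPORT(fn, "2")  /* pre-initialization */
#define INIT_DEVICE_EXPORT(fn)     INIT_EXPORT(fn, "3")  /* device init */
#define INIT_COMPONENT_EXPORT(fn)  INIT_EXPORT(fn, "4")  /* components init */
#define INIT_ENV_EXPORT(fn)        INIT_EXPORT(fn, "5")  /* environment init */
#define INIT_APP_EXPORT(fn)        INIT_EXPORT(fn, "6")  /* application init */

/* rt_components_board_init() calls everything in section ".rti_fn.1";
   rt_components_init() then calls ".rti_fn.2" through ".rti_fn.6". */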

Kernel object inheritance

[LiteOS] The USB-OTG and ST-Link ports explained

Every time I get a new board I face the same question: do I need to buy a programmer, or can I just plug the board straight in?

(Figure: the IoT Board development board)

Take this board as an example: there are two ports on the right, one for USB-OTG and one for ST-Link.

Huawei's LiteOS docs describe it as follows:

How USB OTG works
The most important extensions the OTG supplement makes to USB 2.0 are its more power-efficient power management and allowing a device to operate as either host or peripheral. OTG defines two device types: dual-role OTG devices and peripheral-only OTG devices. A dual-role device fully complies with USB 2.0 while also providing limited host capability and a Mini-AB receptacle, supporting the Host Negotiation Protocol (HNP), and, like peripheral-only OTG devices, supporting the Session Request Protocol (SRP). When working as a host, a dual-role device only has to supply 8 mA on the bus, whereas a conventional host must supply 100-500 mA.
When two dual-role OTG devices are connected together, they can take turns acting as host and peripheral, which preserves the host/peripheral model of the existing USB specification. The OTG host is responsible for initializing data communication: bus reset, fetching the various USB descriptors, and configuring the device. Once configuration is complete, the two OTG devices can transfer data as host and peripheral respectively, and the process of swapping the two devices' roles is defined by HNP.

1.1 Initial roles of the host (A-device) and peripheral (B-device)
A device's initial role is set by the connector. OTG defines a compact receptacle called Mini-AB that accepts either a Mini-A or a Mini-B plug. The Mini-AB receptacle has an ID pin pulled up to the supply rail; a Mini-A plug has its ID pin tied to ground (R < 10 Ω), while a Mini-B plug leaves the ID pin open (R > 100 kΩ). When two OTG devices are connected, the side with the Mini-A plug sees a "0" on its ID pin and the side with the Mini-B plug sees a "1"; the OTG device with ID = 0 defaults to being the host (A-device), and the one with ID = 1 defaults to being the peripheral (B-device). Figure 1 illustrates this.

1.2 Session Request Protocol (SRP)
This protocol allows the A-device (which may be battery-powered) to cut off VBUS when the bus is idle to save power, and it gives the B-device a way to initiate bus activity. Any A-device, including a PC or laptop, can respond to SRP; any B-device, including a standard USB peripheral, can initiate SRP; and a dual-role device is required to be able to both initiate and respond to SRP.
1.3 Host Negotiation Protocol (HNP)
HNP is the protocol that swaps the host/peripheral roles of the A-device and B-device (effectively reversing the cable). A role swap plays out as follows:
(1) The pull-up resistor is used to signal the peripheral.
(2) The A-device sets the "HNP Enable" feature on the B-device.
(3) The B-device disconnects its pull-up.
(4) The A-device connects its pull-up, indicating that it now takes the peripheral role.
(5) The A-device supplies power to VBUS.
(6) The B-device detects the A-device's pull-up.
(7) It resets, enumerates, and uses the A-device.
1.4 Drivers
Unlike a PC host, a portable device has neither a convenient way nor enough room to load new drivers, so the OTG specification requires every dual-role OTG device to carry a list of the peripheral-only OTG target devices it supports, including their types, manufacturers, and other information.
Also unlike a PC, a dual-role OTG device's driver stack consists of both a USB host stack and a USB device stack, covering both modes of operation. The OTG driver decides whether to use the host stack or the device stack based on the connector type, or on whether an HNP role swap has taken place.
When the dual-role device works as a host, the USB host stack is active: its host controller driver handles data exchange between the host stack and the hardware endpoints, the USB driver enumerates devices and stores their information, and the target-peripheral host class drivers support the devices on the target list. Host class drivers are supplied by the chip manufacturer, and OTG also provides generic host class drivers (which can be modified for non-generic devices).
When the dual-role device works as a peripheral, the USB device stack is active: its device controller driver handles data exchange between the device stack and the hardware endpoints, the USB protocol layer implements the USB protocol specification, and the function of the device class driver depends on what the dual-role device is (digital camera, storage device, printer, and so on).
The OTG driver handles the dual-role device's mode switches; it can also report results (such as whether a device supports HNP) and handle bus errors. Application code starts or ends a transfer transaction through the OTG driver and exchanges data with the hardware layer through the USB host stack or device stack.
1.5 Data flow model
An OTG host and device are divided into three layers: the function layer, the USB device layer, and the USB interface layer, as shown in Figure 2.

The USB interface layer provides the physical connection between OTG host and OTG device. The USB system software uses the host controller to manage data transfers between the host and USB devices; relative to the host controller, the system software deals with transfers as seen by the client and with the interaction between client and device. The USB device layer presents a usable logical device to the USB host's system software, and the host realizes the device's various functions through matching client software.

A USB OTG connector carries five wires:
two for data (D+, D-);
one power line (VBUS);
one ground line (GND);
one ID line.

The ID line identifies the two ends of the cable: the ID pin in a mini-A plug (the A end) is tied to ground, while the ID pin in a mini-B plug (the B end) floats. An OTG device that detects a grounded ID pin defaults to being the A-device (host), while one that sees a floating ID pin considers itself the B-device (peripheral).
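
That default-role decision is simple enough to sketch in C (hypothetical names; real transceivers expose the ID state via a register or GPIO):

typedef enum { OTG_ROLE_HOST, OTG_ROLE_DEVICE } otg_role_t;

/* Mini-A plug grounds the ID pin -> default A-device (host);
   Mini-B plug leaves ID floating (pulled up) -> default B-device (peripheral). */
otg_role_t otg_default_role(int id_pin_level)
{
    return (id_pin_level == 0) ? OTG_ROLE_HOST : OTG_ROLE_DEVICE;
}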

To add OTG dual-role capability, the transceiver must be extended so the OTG device can act as either a host or a peripheral. Achieving this requires adding 15 kΩ pull-down resistors on the D+ and D- lines in the circuit of Figure 3 and providing a supply for VBUS. In addition, the transceiver must meet the following three conditions:

(1) It can switch the pull-up and pull-down resistors on the D+/D- lines, providing both the peripheral and the host function.

(2) As an A-device it needs VBUS monitoring and supply circuitry; as a B-device initiating SRP it needs to monitor and pulse VBUS.

(3) It has an ID input pin.

As a dual-role OTG device, the ASIC, DSP, or other circuitry connected to the transceiver must be able to act as both peripheral and host, and must swap roles according to the HNP protocol.

Most of the circuitry added to the transceiver manages the VBUS pin. As a host it must be able to supply 5 V at up to 8 mA. The analog switches in Figure 3 configure the transceiver's various functions.

The ASIC and controller must also include USB host logic control functions: sending SOF (start-of-frame) packets, sending configuration/input/output data packets, scheduling transfers within the 1 ms USB frame, driving the USB reset signal, providing USB power management, and so on.