Proposal for *An Online Systematic Scheduling Algorithm over Distributed I/O Systems*.

In the resource-allocation problem of distributed systems on high-performance computers, we do not know in advance which device (a disk, a NIC, and so on) is more worn out, or which is currently busy and will delay getting the data ready. The current solutions are random or round-robin scheduling to spread wear, and dynamic routing for the fastest path. We can use the collected data to automate these choices.

A seasoned system administrator knows the patterns of the parameters to tweak: the stride of the distributed file system, the network MTU of an InfiniBand card, and the route used to fetch data. eBPF (extended Berkeley Packet Filter) can already record such information (I/O latency on each storage node, network latency across the topology) as time-series data. We can use these data to predict which topology, stride, and other parameter settings give the best way to seek data.

The data arrive online, so the prediction function can be online reinforcement learning. As in a k-armed bandit, the reward can be a function of latency gain and a device-wear parameter, and the updates can come from real-time disk and network latencies. The information fed to the RL agent can include where the data are located on disk, which data are sought most frequently (DBMS queries versus random small files), and how often each disk fails.
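As a concrete sketch of the bandit idea (all names and the reward shape are my own assumptions for illustration, not an existing scheduler), an ε-greedy k-armed bandit over candidate I/O paths could look like this:

```python
import random

class IOPathBandit:
    """Epsilon-greedy k-armed bandit over candidate I/O paths.

    The reward shape (negative latency minus a wear penalty) is a
    hypothetical choice for illustration only.
    """

    def __init__(self, n_paths, epsilon=0.1, seed=0):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.counts = [0] * n_paths      # pulls per path
        self.values = [0.0] * n_paths    # running mean reward per path

    def select(self):
        # Explore with probability epsilon, otherwise exploit the best mean.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def update(self, path, latency_ms, wear_penalty):
        # Incremental mean update: Q <- Q + (r - Q) / n
        reward = -latency_ms - wear_penalty
        self.counts[path] += 1
        self.values[path] += (reward - self.values[path]) / self.counts[path]
```

Each completed I/O would call `update()` with the measured latency (e.g., collected via eBPF) and a wear estimate, and `select()` would pick the path for the next request.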

Benchmarks and evaluation can measure the statistical gain in system latency and the overall disk wear after stress tests.

A few invited talks in our group

Last week, three weeks ago, and four weeks ago we had talks in our group. One was a visiting professor from NUS who works on software analysis. Another presented a project that uses a DTMC to explain audio models; the author, Xiaoning Du, is now at Sydney Tech. The third speaker, Hongxu Chen, did his master's at SJTU and his bachelor's and Ph.D. at NTU.

My gut feeling about the former: a Ph.D. in this direction may not be that hard to get. DeepStellar was a hot topic then and has cooled off now, but it proved the approach could do something, and she struck while the iron was hot and landed several papers on adversarial and benign samples. My feeling about the latter: very strong; even by the end of my Ph.D. I may not reach half his level.

RNN to DTMC


The interpretability here is essentially a pile of statistics.
Further abstraction


The formula used here is \(\operatorname{Dist}\left(\hat{s}, \hat{s}^{\prime}\right)=\sum_{d=1}^{k}\left|I^{d}(\hat{s})-I^{d}\left(\hat{s}^{\prime}\right)\right|\)
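In plain Python this distance is just an L1 norm over the abstract states' profiles. A minimal sketch, assuming each abstract state is encoded as a length-k sequence of its interval values \(I^{d}(\hat{s})\) (the encoding is my assumption):

```python
def state_dist(s_hat, s_hat_prime):
    """L1 distance between two abstract states, each given as a
    length-k sequence of interval values I^d (assumed encoding)."""
    if len(s_hat) != len(s_hat_prime):
        raise ValueError("states must have the same dimension k")
    return sum(abs(a - b) for a, b in zip(s_hat, s_hat_prime))
```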

The final result

In the end they used statistics to "prove" a pile of similarity bounds, and concluded that the states clustered out of the RNN correspond one-to-one to a DTMC. The idea is novel, but in hindsight there is not much substance to it.

MUZZ

The thread-aware part here is impressive.

It more or less opens up a field of its own.


But afterwards I asked him about scalability; his answer (my rough notes):

Go channels won't do; Java locks have a few pitfalls; lock and threading analysis cannot be very precise; Java analysis; fuzzing oracles for exceptions; JLang; the Ark compiler; Scala Native is unreliable; dynamic GC; no idea how to do it in LLVM; the JVM's abstraction level; consistency; academia (India; chenhao; LLVM at UIUC) versus industry, where fuzzing is OK.

He has a few more papers as well.

This firmed up my goal of finding a position in the US. Keep it up!

A coffee with an NUS heavyweight

The greatness of a heavyweight

I would say that no one at a great university with a decent research background is mediocre. Someone who won Singapore's NRF fellowship right after getting his Ph.D. (the equivalent of China's Thousand Talents program) is simply great. A little about his CV.

He graduated from IIT Bombay and still has a close research relationship with IIT Kanpur; I have been to both cities. I would say most people at the IITs are super intelligent, but for lack of money they tend to focus on theory and mathematical proof. He is no exception, nor was the person I met at CRVF-2019.

What is his strength? I think he thinks very fast and goes straight to the essence. For the SAT-solver part, given only the pseudocode, he could quickly come up with useful test cases to probe its usability. I think a great researcher should be equipped with insight, and he obviously has it. For the EEsolver, a new and fast solver for finding false negatives in programs, he insisted on testing the uniformity of the benchmarks; proof is not just the benchmarks. To find the algorithm inside, we should…

The current state of Edge Computing

I always keep an eye on what's going on in edge computing because I'm an IoT fan. Honestly, I started my CS major with IoT projects, though they were pretty dumb (the one listed isn't even my first dumb project, hhhhh).

I had planned to do something at SHIFT, and congratulations to Prof. Yang on his recent publication,
"Multi-tier computing networks for intelligent IoT". But S3L responded to me first, so I'm a security guy now.

I have been mulling over the idea for a while; it is very close to the key state-of-the-art problems:

1. Computing power: the data-processing equipment is no longer a rack server, so how do we ensure performance still meets the requirements?

2. Power consumption: it cannot grow past what ordinary mains power can reasonably supply, and high power draw also means a lot of heat.

3. Stability: field deployment makes on-site maintenance dramatically harder, so better stability also means lower maintenance cost. This includes harsh environments on the user side, such as high temperature, humidity, and corrosive gases.

4. Cost: only when the cost covers the demand can we deploy widely enough to meet customer needs; if the cost cannot compete with network plus data center, it is meaningless.

Moore's law has hit a bottleneck, and it is harder and harder to get the best of both general-purpose and workload-specific optimization. At this point, a hardware coprocessor that bakes common AI algorithms directly into the edge device becomes the key to high performance at low power. A key threshold is a 6 W TDP: below roughly 6 W, a chip can be cooled by a heat sink alone, without a fan. No fan means not only less noise but also that stability and maintainability no longer hinge on fan failure. Among edge-side inference chips, Horizon, with its self-developed BPU architecture, has found a new balance among these requirements: its 4 TOPS of equivalent compute matches a top GPU of two years ago, while typical power consumption is only 2 W. That not only removes the fan; the whole unit can be sealed in a metal case, avoiding the dust and corrosion that extra openings invite.

On computing power, the industry has a big misconception: it often takes peak compute as the main index for an AI chip. What we really need is effective compute and the algorithmic performance it delivers. This should be measured along four dimensions: peak compute per watt and peak compute per dollar (set by the chip architecture, front/back-end design, and process node), the effective utilization of that peak (set by the algorithm and the architecture), and the ratio of effective compute to final AI performance, chiefly speed and accuracy (set by the algorithm). ResNet used to be the industry default; today a smaller, more carefully designed model such as MobileNet reaches the same accuracy and speed with one tenth of the compute. But these ingeniously designed algorithms pose huge challenges to the compute architecture: they often slash the effective utilization of traditionally designed architectures, and measured by final AI performance the change can even be a net loss. Horizon's biggest strength is predicting how key algorithms in important scenarios will evolve and folding those compute characteristics into the architecture ahead of time, so the AI processor still fits the mainstream algorithms after one or two years of development. Compared with other typical AI processors, Horizon's has therefore kept a rather high effective utilization as algorithms evolve, genuinely cashing in on algorithmic innovation. Horizon also optimizes the compiler's instruction scheduling; after optimization, effective peak utilization rises by 85%.
This makes the chip 2.5x faster, or cuts power to 40% for the same workload. Another feature of the Horizon BPU is tight integration with on-site sensors. Video needs huge bandwidth: 1080p @ 30 fps is about 1.5 Gbit/s from camera to chip. The BPU performs video input, on-site object detection, tracking, and recognition at once, so all the necessary work finishes on site. Both the Journey series for intelligent driving and the Sunrise series for the intelligent Internet of Things handle the scene's bandwidth and processing load with ease. More importantly, a common AI computation finishes within 30 ms, which gradually turns latency-critical applications into reality: autonomous driving, and recognition of lane lines, pedestrians, vehicles, and obstacles, where an overly large or unpredictable delay causes accidents.

With the Sunrise BPU, however, AI computation completes within a predictable delay, which makes developing autonomous driving more convenient. Ever since it was proposed, edge computing has been held back by compute performance, strict sensor constraints, and power consumption, so it has developed slowly. Horizon's BPU chips, striking a new balance between function and performance, can help edge applications deploy to the field more easily, so all kinds of IoT applications can serve everyone more effectively.

credit: https://www.zhihu.com/question/274787680

Project inspiration from a group-meeting paper [2]

We happen to be doing a small project for a security competition, and we also need to finish a static-analysis assignment for the compilers course.

The paper does deep-learning-based detection of security vulnerabilities in permission-API calls. It performs labeled learning over an extended CFG: it first learns the relation between data flow and API calls on benign apps, then uses that to detect whether a new app is malicious.

In the end, the paper's novelty is surprisingly just the relation between widgets and words.

Project inspiration from a group-meeting paper [1]

https://arxiv.org/pdf/1903.04881.pdf

First, a clarification of several concepts (reference: https://blog.csdn.net/program_developer/article/details/79946787)

I. Where the ROC curve comes from

Many learners produce a real value or probability for each test sample and compare it with a classification threshold: above the threshold means positive, otherwise negative. For example, a neural network typically predicts a real value in [0.0, 1.0] for each test sample and compares it with the threshold 0.5: above 0.5 is judged positive, otherwise negative. How well this threshold is set directly determines the learner's generalization ability.

Different tasks call for different thresholds. If we care more about precision, we can set the threshold higher so the classifier's positive predictions are more certain; if we care more about recall, we can set it lower so the classifier predicts more positives. The quality of the threshold thus reflects the learner's generalization performance across tasks. To picture this trade-off we introduce the ROC curve, a powerful tool for studying generalization from the angle of threshold selection.

Error rate: the fraction of misclassified samples among all samples. For a sample set D, the error rate is computed as in Eq. (1).

\begin{equation} E(f;D)=\frac{1}{m}\sum_{i=1}^{m} \mathbb{I}\left(f(x_{i})\neq y_{i}\right) \tag{1} \end{equation}

Explanation of Eq. (1): count the predictions that differ from the ground truth, then divide by the size of the sample set D. (\(\mathbb{I}(\cdot)\) is the indicator function.)

Accuracy: the fraction of correctly classified samples among all samples. For a sample set D, accuracy is computed as in Eq. (2).

\begin{equation} acc(f;D) = \frac{1}{m}\sum^{m}_{i=1}\mathbb{I}\left(f(x_{i})=y_{i}\right) = 1-E(f;D). \tag{2} \end{equation}

Note: the correctly classified samples here include both the correctly classified positives and the correctly classified negatives.


(1) Why precision and recall exist:

Scenario 1:

Error rate and accuracy are common, but they do not satisfy every task's needs. Take the watermelon problem: a farmer brings in a cart of watermelons and we judge them with a trained model. The error rate measures what fraction of melons are misjudged. But if what we care about is "what fraction of the picked melons are good" or "what fraction of all good melons got picked", the error rate is clearly not enough and we need other performance measures.

Scenario 2:

Similar needs often arise in information retrieval, web search, and the like. In information retrieval, for instance, we often care about "what fraction of the retrieved information is relevant" and "what fraction of the relevant information was retrieved".

Precision and recall are performance measures better suited to such needs.

(2) What precision and recall are

For binary classification, samples can be divided, according to the combination of their true class and the learner's predicted class, into true positives, false positives, true negatives, and false negatives. Let TP, FP, TN, and FN denote the corresponding counts; clearly TP + FP + TN + FN = the total number of samples. The "confusion matrix" of the classification results is shown in Table 1.

Table 1: confusion matrix of classification results

| Ground truth | Predicted positive | Predicted negative |
| --- | --- | --- |
| Positive | TP (true positive) | FN (false negative) |
| Negative | FP (false positive) | TN (true negative) |

Precision (P) is defined with respect to the prediction: of the samples predicted positive, how many are truly positive. It is defined in Eq. (3):

\begin{equation} P=\frac{TP}{TP+FP} \tag{3} \end{equation}

Note: there is an easy point of confusion here. Accuracy and precision are different concepts with different computations, so pay attention to these details when reading papers.

Accuracy (A) is the fraction of correctly classified samples among all samples; it reflects the classifier's ability over the whole sample set (judging positives as positive and negatives as negative). It is defined in Eq. (4):

\begin{equation} A=\frac{TP+TN}{TP+FP+TN+FN} \tag{4} \end{equation}

Recall (R), also called the recall rate, is defined with respect to the original samples: of the truly positive samples, how many are predicted correctly. It is defined in Eq. (5):

\begin{equation} R=\frac{TP}{TP+FN} \tag{5} \end{equation}

Note: compare the formulas for precision and recall. They differ only in the denominator: precision's denominator is the number of samples predicted positive, while recall's denominator is the number of truly positive samples.
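The two definitions can be sketched in a few lines (function and variable names are mine, not from the source):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); Recall = TP/(TP+FN).
    Only the denominators differ, as noted above."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```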

(3) The tension between precision and recall

Precision and recall are a pair of conflicting measures. In general, when precision is high, recall tends to be low; and when recall is high, precision tends to be low.

Think about it: why does this happen?

Answer: in a classifier, if you want higher precision you must set the threshold higher, so that you are more confident the predicted positives are true positives. But once the threshold is raised, fewer samples are predicted positive, so fewer true positives are found and you cannot recall all the positive samples.

An example helps. If we want to pick out as many good melons as possible, we can pick more melons; if we pick all of them, every good melon is certainly included, but precision will be low. If we want the picked melons to contain as high a fraction of good ones as possible, we can pick only the melons we are most sure about, but then we inevitably miss quite a few good melons, so recall is low. Usually only in simple tasks can both recall and precision be very high.

The P-R curve. In many cases we can sort the samples by the learner's predictions, putting first the sample the learner considers "most likely" positive and last the one considered "least likely" positive. Setting the threshold at each position in this order, we predict the samples positive one by one, and each time we can compute the current precision and recall. Plotting precision on the y-axis against recall on the x-axis gives the precision-recall curve, the "P-R curve" for short, and the plot showing it is called a "P-R graph".

The P-R graph intuitively shows a learner's recall and precision over the whole sample set. When comparing learners, if one learner's P-R curve is completely "enclosed" by another's, the latter can be declared better; in Fig. 1, learner A outperforms learner C. If two P-R curves cross, as A and B do in Fig. 1, neither can be declared better in general; we can only compare at specific precision or recall values. Still, people often want to rank A against B anyway. A reasonable basis is the area under the P-R curve, which to some extent captures the proportion at which a learner achieves "doubly high" precision and recall. But this area is hard to estimate, so measures that combine precision and recall were designed, such as the BEP and F1 measures.

The "Break-Even Point" (BEP) is one such measure: the value at which precision = recall. In Fig. 1, learner C's BEP is 0.64, and by BEP learner A is better than B.

The F1 measure. BEP is still a bit too simplistic; the F1 measure is more common. F1 comes from the weighted harmonic mean, computed as in Eq. (6):

\begin{equation} \frac{1}{F_{\beta}}=\frac{1}{1+\beta^{2}}\left(\frac{1}{P}+\frac{\beta^{2}}{R}\right) \tag{6} \end{equation}

Compared with the arithmetic mean \(\frac{P+R}{2}\) and the geometric mean \(\sqrt{PR}\), the harmonic mean puts more weight on the smaller value. When β = 1, F1 is defined as the harmonic mean of precision and recall, as in Eq. (7):

\begin{equation} \frac{1}{F_{1}}=\frac{1}{2}\left(\frac{1}{P}+\frac{1}{R}\right) \tag{7} \end{equation}

Taking the reciprocal of Eq. (7) gives the F1 measure, Eq. (8):

\begin{equation} F_{1}=\frac{2PR}{P+R} \tag{8} \end{equation}

In some applications, precision and recall are weighted differently. In a product recommender system, for example, we prefer that recommendations really interest the user so as to disturb them as little as possible, so precision matters more; in a fugitive-retrieval system, we prefer to miss as few fugitives as possible, so recall matters more. The general form of the F measure, F_β, lets us express such a preference; it is defined in Eq. (9):

\begin{equation} F_{\beta}=\frac{\left(1+\beta^{2}\right) P R}{\beta^{2} P+R} \tag{9} \end{equation}

Here β > 0 measures the relative importance of recall to precision: β = 1 reduces to the standard F1; β > 1 gives recall more influence; β < 1 gives precision more influence.
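The general F_β form is easy to check numerically. A minimal sketch (names are mine):

```python
def f_beta(p, r, beta=1.0):
    """Weighted harmonic mean of precision p and recall r:
    beta > 1 favors recall, beta < 1 favors precision."""
    if p == 0 and r == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * p * r / (b2 * p + r)
```

With p = 0.8 and r = 0.4, raising β pulls the score toward the (smaller) recall, which matches the description above.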

II. What the ROC curve is

ROC stands for "Receiver Operating Characteristic" curve. Based on the learner's predictions, we sweep the threshold from 0 to the maximum: at first every sample is predicted positive, and as the threshold grows the learner predicts fewer and fewer positives, until finally no sample is positive. At each step we compute two important quantities and plot them as the x and y coordinates, giving the "ROC curve".

The y-axis of the ROC curve is the "true positive rate" (TPR) and the x-axis is the "false positive rate" (FPR). In the notation of Table 1 above, they are defined as:

\begin{equation} TPR=\frac{TP}{TP+FN}, \qquad FPR=\frac{FP}{TN+FP} \end{equation}

The plot showing the ROC curve is called the "ROC graph". Figure 1 gives a sketch. The diagonal corresponds to the "random guess" model, while the point (0, 1) corresponds to the "ideal model" that predicts every positive as a true positive and every negative as a true negative.

Figure 1: ROC curve and the AUC area
In real tasks we usually draw the ROC graph from a finite set of test samples, so we can only obtain finitely many (FPR, TPR) coordinate pairs and cannot produce the smooth ROC curve of Fig. 1, only the approximate ROC curve of Fig. 2. Drawing it is simple: given \(m^{+}\) positives and \(m^{-}\) negatives, sort the samples by the learner's predicted score, then set the threshold to the maximum so every sample is predicted negative; both TPR and FPR are 0, so mark a point at (0, 0). Then set the threshold to each sample's predicted value in turn, i.e., predict the samples positive one by one. If the previous marked point is \((x, y)\), mark \((x, y + \frac{1}{m^{+}})\) when the current sample is a true positive, and \((x + \frac{1}{m^{-}}, y)\) when it is a false positive; finally connect adjacent points with line segments.
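The drawing procedure just described can be transcribed directly (function and variable names are mine, not from the source):

```python
def roc_points(scores, labels):
    """Approximate ROC points per the procedure above: sort by score
    descending, start at (0, 0), step up by 1/m+ on a true positive
    and right by 1/m- on a false positive."""
    m_pos = sum(labels)
    m_neg = len(labels) - m_pos
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    x, y = 0.0, 0.0
    points = [(x, y)]
    for i in order:
        if labels[i] == 1:       # true positive: TPR rises
            y += 1.0 / m_pos
        else:                    # false positive: FPR rises
            x += 1.0 / m_neg
        points.append((x, y))
    return points
```

Connecting the returned points with line segments gives the approximate ROC curve; summing trapezoid areas over them gives the AUC estimate discussed below.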

III. What the ROC curve means

(1) Main uses

1. The ROC curve makes it easy to see how any threshold affects the learner's generalization performance.

2. It helps choose the best threshold. The closer the ROC curve gets to the top-left corner, the higher the model's recall; the point on the ROC curve nearest the top-left corner is the best threshold, with the fewest misclassifications, i.e. the smallest total of false positives and false negatives.

3. It allows comparing different learners. Draw the ROC curves of the learners in the same plot and judge them visually: the ROC curve nearest the top-left corner represents the most accurate learner.

(2) Advantages

1. The method is simple and intuitive; the plot lets you analyze a method's accuracy by eye. The ROC curve combines TPR and FPR in one picture, accurately reflects their relationship for a given learner, and is a comprehensive representation of detection accuracy.
2. In bioinformatics: the ROC curve does not fix a threshold, which allows intermediate states; users can bring in domain expertise to weigh missed diagnoses against false alarms and choose a better threshold as the diagnostic reference value.

IV. Where the AUC area comes from

If two ROC curves do not cross, the curve nearest the top-left corner represents the better learner. In practice, however, things are messy: if two ROC curves cross, it is hard to say in general which is better, yet in many applications we still want to rank the learners. Hence the AUC area.

When comparing learners, if one learner's ROC curve is completely "enclosed" by another's, the latter is better; if the two curves cross, no general claim can be made. If a comparison must still be made, a reasonable criterion is the area under the ROC curve, the AUC (Area Under ROC Curve), as shown in Figs. 1 and 2.

V. What the AUC area is

AUC is the area under the ROC curve, a performance index for judging how good a learner is. By definition, AUC is obtained by summing the areas of the parts under the ROC curve. Suppose the ROC curve is formed by connecting the points \(\{(x_{1},y_{1}),\ldots,(x_{m},y_{m})\}\) in order (see Fig. 2); then AUC can be estimated as

\begin{equation} \mathrm{AUC}=\frac{1}{2}\sum_{i=1}^{m-1}\left(x_{i+1}-x_{i}\right)\left(y_{i}+y_{i+1}\right). \end{equation}

VI. What the AUC area means

AUC is an evaluation index for binary classification models; it equals the probability that a predicted positive ranks ahead of a negative.

Reading this, you may wonder how AUC, given its definition and computation, connects to the probability that positives rank ahead of negatives. Understanding AUC from the definition alone is hard; in fact AUC is closely related to the Mann-Whitney U test. From the viewpoint of the Mann-Whitney U statistic, AUC is the probability that, picking one sample at random from all positives and one at random from all negatives, the learner assigns the positive a higher predicted-positive probability than the negative. So AUC reflects the classifier's ability to rank samples. By this reading, completely random classification gives an AUC near 0.5.

Also note that the computation of AUC accounts for the classifier's ability on both positives and negatives, so it still evaluates a classifier sensibly on imbalanced data; this insensitivity to class balance is one reason imbalanced problems are usually evaluated with AUC. For example, in cancer prediction, suppose the non-cancer samples are positives and the cancer samples are negatives, with negatives very rare (about 0.1%). Under accuracy, predicting every sample positive already scores 99.9%. Under AUC, predicting everything positive gives TPR 1 and FPR 1, so the learner's AUC is 0.5, successfully dodging the problem brought by class imbalance.
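The rank interpretation can be checked directly: AUC equals the probability that a random positive scores above a random negative, counting ties as one half. A brute-force sketch over all positive/negative pairs (the Mann-Whitney U view; names are mine):

```python
def auc_by_ranking(scores, labels):
    """AUC as P(score_pos > score_neg) + 0.5 * P(tie), computed over
    all positive/negative pairs."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly separating scorer gives 1.0, and a scorer that ranks every sample identically gives 0.5, matching the random-guess baseline above.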


Top-conference browsing prompted by a group meeting

The group meeting covered this paper. Seeing that Huawei is still one of the biggest sponsors of MICRO 2019, I am reassured: the saying that academia has no borders holds in the vast majority of cases.

Let's look at the paper, by Mengjia Yan of UIUC; the conference is in Japan (I kind of want to go).

First, prediction: the CPU needs to predict future allocation from each process's past resource usage, to reduce the unfairness between threads and the security problems that bad dispatch brings. Meanwhile, some threads (there are more of them on an OS) or programs move from a safe state to an unsafe one, or switch states at some point, which affects scheduling results, so the CPU scheduler also predicts the branches. Industry today mainly uses heuristic prediction, which performs better than the FCFS, SJF, SRTF, and the old RR we learned in the OS course.

The paper mentions a naïve version of the Spectre vulnerability, a speculative execution attack. Suppose the cache points to a place on the drive (the cache here does not necessarily store data; it may be mapped to another address space); then set aside a new cache region as a buffer, which lands in the cache after passing through the LLC. After reading it once, simply timing the accesses reveals where the vulnerability is likely to be.

P.S. Zhao raised a question: a read will evict the next entry's data left in the cache; in practice this is handled by programming part of the LLC to remove the effect. But since the algorithm is fairly naïve, worst case just scan 100 times and compare.

An ln problem in CUDA 10.1: dso_loader.cc:55] Could not load dynamic library 'libcusparse.so.10.0'

I hit a linking problem: what is installed is CUDA 10.1, neither 10.0 nor plain 10, so we have to fix the links ourselves. Also, in 10.1 cublas is not in the lib64 directory but in the x86_64-linux-gnu directory.

The tutorial I followed is https://www.tensorflow.org/install/gpu

2019-08-12 06:57:41.705482: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
 2019-08-12 06:57:41.707082: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: 
 name: GeForce RTX 2070 with Max-Q Design major: 7 minor: 5 memoryClockRate(GHz): 1.185
 pciBusID: 0000:01:00.0
 2019-08-12 06:57:41.707394: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcudart.so.10.0'; dlerror: libcudart.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64
 2019-08-12 06:57:41.707637: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcublas.so.10.0'; dlerror: libcublas.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64
 2019-08-12 06:57:41.707908: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcufft.so.10.0'; dlerror: libcufft.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64
 2019-08-12 06:57:41.708132: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcurand.so.10.0'; dlerror: libcurand.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64
 2019-08-12 06:57:41.708355: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcusolver.so.10.0'; dlerror: libcusolver.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64
 2019-08-12 06:57:41.708580: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcusparse.so.10.0'; dlerror: libcusparse.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64
 2019-08-12 06:57:41.708639: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
 2019-08-12 06:57:41.708665: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1641] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
 Skipping registering GPU devices…
 2019-08-12 06:57:41.709682: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
 2019-08-12 06:57:41.709692: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      
cd /usr/local/cuda/lib64 && for f in lib*.so.10; do sudo ln -s "$f" "$f.0"; done
sudo ln -s /usr/lib/x86_64-linux-gnu/libcublas.so.10 /usr/local/cuda/lib64/libcublas.so.10.0

Then you can restart your Python console and try the TensorFlow GPU check again:

python -c 'import tensorflow as tf; print("GPU Available:", tf.test.is_gpu_available())'

Adversarial AI study diary


I produced a bit of useless junk: just walking through the NTU senior's artifact step by step.

PS C:\Users\AERO> docker attach mynginx^C
PS C:\Users\AERO> docker attach dgl2019/icse2019-artifacts
Error: No such container: dgl2019/icse2019-artifacts 
PS C:\Users\AERO> docker attach docker.io/dgl2019/icse2019-artifacts 
Error: No such container: docker.io/dgl2019/icse2019-artifacts 
PS C:\Users\AERO> docker ps 
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 
PS C:\Users\AERO> docker pull dgl2019/icse2019-artifacts 
Using default tag: latest 
latest: Pulling from dgl2019/icse2019-artifacts 
Digest: sha256:ddf6ceb380481b67485b18728f302958113569ea8571b9fcd78439724eeaaef8 
Status: Image is up to date for dgl2019/icse2019-artifacts:latest docker.io/dgl2019/icse2019-artifacts:latest 
PS C:\Users\AERO> docker ps 
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 
PS C:\Users\AERO> docker images 
REPOSITORY TAG IMAGE ID CREATED SIZE 
dgl2019/icse2019-artifacts latest a5a18674d9a4 6 months ago 6.43GB 
PS C:\Users\AERO> docker exec -it a5a18674d9a4 /bin/bash 
Error: No such container: a5a18674d9a4 
PS C:\Users\AERO> docker exec -it dgl2019/icse2019-artifacts /bin/bash 
Error: No such container: dgl2019/icse2019-artifacts
PS C:\Users\AERO> docker exec -it dgl2019 /bin/bash 
Error: No such container: dgl2019 
PS C:\Users\AERO> docker run -it dgl2019/icse2019-artifacts /bin/bash 
root@ef79040f7919:/# ls 
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@ef79040f7919:/# cd home/
root@ef79040f7919:/home# ls
icse2019
root@ef79040f7919:/home# cd icse2019/
root@ef79040f7919:/home/icse2019# ls
source
root@ef79040f7919:/home/icse2019# cd source/
root@ef79040f7919:/home/icse2019/source# ls
__init__.py attacks build-in-resource config detect lcr_auc model_mutation models scripts utils
root@ef79040f7919:/home/icse2019/source# cd config/
root@ef79040f7919:/home/icse2019/source/config# ls
logging.yaml
root@ef79040f7919:/home/icse2019/source/config# cd ..
root@ef79040f7919:/home/icse2019/source# cd model
bash: cd: model: No such file or directory
root@ef79040f7919:/home/icse2019/source# ls
__init__.py attacks build-in-resource config detect lcr_auc model_mutation models scripts utils
root@ef79040f7919:/home/icse2019/source# cd models
root@ef79040f7919:/home/icse2019/source/models# ls
__init__.py __init__.pyc ensemble_model.py ensemble_model.pyc googlenet.py googlenet.pyc lenet.py lenet.pyc
root@ef79040f7919:/home/icse2019/source/models# python lenet.py
root@ef79040f7919:/home/icse2019/source/models# uname -r
4.9.184-linuxkit
root@ef79040f7919:/home/icse2019/source/models# uname -a
Linux ef79040f7919 4.9.184-linuxkit #1 SMP Tue Jul 2 22:58:16 UTC 2019 x86_64 GNU/Linux
root@ef79040f7919:/home/icse2019/source/models# ls
__init__.py __init__.pyc ensemble_model.py ensemble_model.pyc googlenet.py googlenet.pyc lenet.py lenet.pyc
root@ef79040f7919:/home/icse2019/source/models# cd ..
root@ef79040f7919:/home/icse2019/source# ls
__init__.py attacks build-in-resource config detect lcr_auc model_mutation models scripts utils
root@ef79040f7919:/home/icse2019/source# cd utils/
root@ef79040f7919:/home/icse2019/source/utils# ls
__init__.py data_manger.pyc logging_util.pyc model_trainer.py pytorch_extend.pyc
__init__.pyc imgnet12-valprep.sh model_manager.py model_trainer.pyc time_util.py
data_manger.py logging_util.py model_manager.pyc pytorch_extend.py time_util.pyc
root@ef79040f7919:/home/icse2019/source/utils# ls
__init__.py data_manger.pyc logging_util.pyc model_trainer.py pytorch_extend.pyc
__init__.pyc imgnet12-valprep.sh model_manager.py model_trainer.pyc time_util.py
data_manger.py logging_util.py model_manager.pyc pytorch_extend.py time_util.pyc
root@ef79040f7919:/home/icse2019/source/utils# cd ..
root@ef79040f7919:/home/icse2019/source# cd scripts/
root@ef79040f7919:/home/icse2019/source/scripts# ./craftAdvSamples.sh
NOTE: Our experiments are only based on two datasets: mnist and cifar10,
but it is a piece of cake to extend to other datasets only providing a proper pytorch-style data loader tailored to himself datasets.
Each attack manner has different parameters. All the parameters are organized in a list. The order of the parameters can be found in the README in this folder.
To quickly yield adversarial samples, we provide a default setting for each attack manner. Do you want to perform an attack with the default settings? y/n
y
dataType ( [0] mnist; [1] cifar10): 1
attackType: fgsm
=======>Please Check Parameters<=======
modelName: googlenet
modelPath: ../build-in-resource/pretrained-model/googlenet.pkl
dataType: 1
sourceDataPath: ../build-in-resource/dataset/cifar10/raw
attackType: fgsm
attackParameters: 0.03,true
savePath: ../artifacts_eval/adv_samples/cifar10/fgsm
device: -1
<======>Parameters=======>
Press any key to start attack process
CTRL+C break command bash...
Crafting Adversarial Samples....
targeted model: Average loss: -11.4510, Accuracy: 9049/10000 (90.49%)
./craftAdvSamples.sh: line 129: 34 Killed    python -u $exe_file --modelName ${modelName} --modelPath ${modelPath} --dataType ${dataType} --sourceDataPath ${sourceDataPath} --attackType ${attackType} --attackParameters ${attackParameters} --savePath ${savePath} --device ${device}
DONE!
root@ef79040f7919:/home/icse2019/source/scripts# nvidia-smi
bash: nvidia-smi: command not found
root@ef79040f7919:/home/icse2019/source/scripts# ks
bash: ks: command not found
root@ef79040f7919:/home/icse2019/source/scripts# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                2
On-line CPU(s) list:   0,1
Thread(s) per core:    2
Core(s) per socket:    1
Socket(s):             1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 158
Model name:            Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
Stepping:              10
CPU MHz:               2362.464
BogoMIPS:              4724.92
Hypervisor vendor:     Microsoft
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              12288K
root@ef79040f7919:/home/icse2019/source/scripts# lspci
bash: lspci: command not found
root@ef79040f7919:/home/icse2019/source/scripts# dmseg
bash: dmseg: command not found
root@ef79040f7919:/home/icse2019/source/scripts# dmesg
dmesg: read kernel buffer failed: Operation not permitted
root@ef79040f7919:/home/icse2019/source/scripts# sudo dmesg
bash: sudo: command not found
root@ef79040f7919:/home/icse2019/source/scripts# ^Cdo dmesg
root@ef79040f7919:/home/icse2019/source/scripts# ./advSampelsVerify.sh 0 ../artifacts_eval/adv_samples/mnist/fgsm/2019-01-13_03:48:45 -1 1
Traceback (most recent call last):
  File "../attacks/attack_util.py", line 369, in <module>
    test_adv_samples()
  File "../attacks/attack_util.py", line 318, in test_adv_samples
    ]), show_file_name=True, img_mode=img_mode, max_size=10000)
  File "../utils/data_manger.py", line 240, in __init__
    all_files = np.array([img_file for img_file in os.listdir(root)])
OSError: [Errno 2] No such file or directory: '../artifacts_eval/adv_samples/mnist/fgsm/2019-01-13_03:48:45'
root@ef79040f7919:/home/icse2019/source/scripts# ls
advSampelsVerify.sh craftAdvSamples.sh default_cifar10_auc_analysis.sh default_mnist_auc_analysis.sh detect.sh lcr_acu_analysis.sh modelMuated.sh
root@ef79040f7919:/home/icse2019/source/scripts# ./advSampelsVerify.sh 0 ../a
artifacts_eval/ attacks/
root@ef79040f7919:/home/icse2019/source/scripts# ./advSampelsVerify.sh 0 ../artifacts_eval/adv_samples/cifar10/fgsm/2019-08-07_00\:04\:11/ -1 1
Total:0,Success:0
root@ef79040f7919:/home/icse2019/source/scripts# ./craftAdvSamples.sh
NOTE: Our experiments are only based on two datasets: mnist and cifar10,
but it is a piece of cake to extend to other datasets only providing a proper pytorch-style data loader tailored to himself datasets.
Each attack manner has different parameters. All the parameters are organized in a list. The order of the parameters can be found in the README in this folder.
To quickly yield adversarial samples, we provide a default setting for each attack manner. Do you want to perform
an attack with the default settings?y/n
y
dataType ( [0] mnist; [1] cifar10):0
attackType:fgsm
=======>Please Check Parameters<=======
modelName: lenet
modelPath: ../build-in-resource/pretrained-model/lenet.pkl
dataType: 0
sourceDataPath: ../build-in-resource/dataset/mnist/raw
attackType: fgsm
attackParameters: 0.35,true
savePath: ../artifacts_eval/adv_samples/mnist/fgsm
device: -1
<======>Parameters=======>
Press any key to start attack process
CTRL+C break command bash...
Crafting Adversarial Samples....

targeted model: Average loss: -12.9422, Accuracy: 9829/10000 (98.29%)
y
Eps=0.35: Average loss: -7.5184, Accuracy: 7775/9829 (79.10%)
successful samples 2054
Done!
icse19-eval-attack-fgsm: rename 125, remove 40,success 1889
Adversarial samples are saved in ../artifacts_eval/adv_samples/mnist/fgsm/2019-08-07_00:58:21
DONE!
root@ef79040f7919:/home/icse2019/source/scripts# /modelMuated.sh
bash: /modelMuated.sh: No such file or directory
root@ef79040f7919:/home/icse2019/source/scripts# 。/modelMuated.sh
bash: 。/modelMuated.sh: No such file or directory
root@ef79040f7919:/home/icse2019/source/scripts# ./modelMuated.sh
NOTE: Our experiments are only based on two datasets: mnist and cifar10,

but it is a piece of cake to extend to other datasets only providing a

proper pytorch-style data loader tailored to himself datasets.
To quickly verify the mutation process, we provide a group of default parameters,do you want to quickly start the
program?y/n
y
=======>Parameters<=======
modelName: lenet
modelPath: ../build-in-resource/pretrained-model/lenet.pkl
accRation: 0.9
dataType: 0
numMModels: 10
mutatedRation: 0.001
opType: GF
savePath: ../artifacts_eval/modelMuation/
device: -1
<======>Parameters=======>
Press any key to start mutation process
CTRL+C break command bash...
2019-08-07 00:59:08,632 - INFO - data type:mnist
2019-08-07 00:59:08,637 - INFO - >>>>>>>>>>>>Start-new-experiment>>>>>>>>>>>>>>>>
2019-08-07 00:59:10,078 - INFO - orginal model acc=0.9829
2019-08-07 00:59:10,079 - INFO - acc_threshold:88.0%
2019-08-07 00:59:10,079 - INFO - seed_md_name:lenet,op_type:GF,ration:0.001,acc_tolerant:0.9,num_mutated:10
2019-08-07 00:59:10,091 - INFO - 61/61706 weights to be fuzzed
2019-08-07 00:59:12,375 - INFO - Mutated model: accurate 0.9818
2019-08-07 00:59:12,379 - INFO - Progress:1/10
2019-08-07 00:59:12,388 - INFO - 61/61706 weights to be fuzzed
2019-08-07 00:59:13,678 - INFO - Mutated model: accurate 0.9832
2019-08-07 00:59:13,682 - INFO - Progress:2/10
2019-08-07 00:59:13,691 - INFO - 61/61706 weights to be fuzzed
2019-08-07 00:59:14,958 - INFO - Mutated model: accurate 0.9823
2019-08-07 00:59:14,960 - INFO - Progress:3/10
2019-08-07 00:59:14,971 - INFO - 61/61706 weights to be fuzzed
2019-08-07 00:59:16,271 - INFO - Mutated model: accurate 0.9827
2019-08-07 00:59:16,274 - INFO - Progress:4/10
2019-08-07 00:59:16,283 - INFO - 61/61706 weights to be fuzzed
2019-08-07 00:59:17,582 - INFO - Mutated model: accurate 0.9829
2019-08-07 00:59:17,586 - INFO - Progress:5/10
2019-08-07 00:59:17,595 - INFO - 61/61706 weights to be fuzzed
2019-08-07 00:59:18,764 - INFO - Mutated model: accurate 0.9829
2019-08-07 00:59:18,767 - INFO - Progress:6/10
2019-08-07 00:59:18,777 - INFO - 61/61706 weights to be fuzzed
2019-08-07 00:59:19,921 - INFO - Mutated model: accurate 0.982
2019-08-07 00:59:19,923 - INFO - Progress:7/10
2019-08-07 00:59:19,932 - INFO - 61/61706 weights to be fuzzed
2019-08-07 00:59:21,072 - INFO - Mutated model: accurate 0.9823
2019-08-07 00:59:21,075 - INFO - Progress:8/10
2019-08-07 00:59:21,086 - INFO - 61/61706 weights to be fuzzed
2019-08-07 00:59:22,262 - INFO - Mutated model: accurate 0.983
2019-08-07 00:59:22,264 - INFO - Progress:9/10
2019-08-07 00:59:22,274 - INFO - 61/61706 weights to be fuzzed
2019-08-07 00:59:23,442 - INFO - Mutated model: accurate 0.9824
2019-08-07 00:59:23,444 - INFO - Progress:10/10
The mutated models are stored in ../artifacts_eval/modelMuation/2019-08-07_00:59:08/gf0.001/lenet
root@ef79040f7919:/home/icse2019/source/scripts# ../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy^C
root@ef79040f7919:/home/icse2019/source/scripts# ../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy
bash: ../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy: Permission denied
root@ef79040f7919:/home/icse2019/source/scripts# sudo ../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy
bash: sudo: command not found
root@ef79040f7919:/home/icse2019/source/scripts# chmod 777 ../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy
root@ef79040f7919:/home/icse2019/source/scripts# ../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy bash: ../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy: cannot execute binary file: Exec format error
root@ef79040f7919:/home/icse2019/source/scripts# cat ../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy
�NUMPY v {'descr': '<f8', 'fortran_order': False, 'shape': (1000,), } [... binary float64 payload omitted ...]
root@ef79040f7919:/home/icse2019/source/scripts# ./lcr_acu_analysis.sh
NOTE: Our experiments are only based on two datasets: mnist and cifar10, but it is a piece of cake to extend other datasets providing proper pytorch-style data loader tailored himself datasets.
To quickly perform label change rate auc statistics, we provide a group of default parameters, do you want to start the program? y/n
y
Please ... lcr result of normal samples for computing ... test. Do you have results of normal samples? (y/n)
y
path of normal's list: ../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy
=======>Please Check Parameters<=======
dataType: mnist
device: -1
testType: adv
useTrainData: False
batchModelSize: 2
maxModelsUsed: 10
mutatedModelsPath: ../build-in-resource/mutated_models/mnist/lenet/gf/5e-2p/
testSamplesPath: ../build-in-resource/dataset/mnist/adversarial/jsma/
seedModelName: lenet
test_result_folder: ../lcr_auc-testing-results/mnist/lenet/gf/5e-2p/jsma/
The test will be divided into 5 batches
The logs will be saved in:
../lcr_auc-testing-results/mnist/lenet/gf/5e-2p/jsma/-2019-08-07-01
is_adv: True
nrLcrPath: ../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy
=======>Parameters<=======
Press any key to start mutation process
CTRL+C break command bash...
batch:1
model_start_no:1
batch:2
model_start_no:3
batch:3
model_start_no:5
batch:4
model_start_no:7
batch:5
model_start_no:9
Testing Done!
>>>>>>>>>>>seed data:mnist,mutated_models:../build-in-resource/mutated_models/mnist/lenet/gf/5e-2p/<<<<<<<<<<
>>>>>>>>>>>mnist<<<<<<<<<<<<<<
Total Samples Used:1000,auc:0.9871,avg_lcr:0.5032,std:0.1693,confidence(95%):0.0105,confidence(98%):0.0125,confidence(99%):0.0138
root@ef79040f7919:/home/icse2019/source/scripts# ./detect.sh
NOTE: Our experiments are only based on two datasets: mnist and cifar10, but it is a piece of cake to extend to other datasets only providing a proper pytorch-style data loader tailored to himself datasets.
To quickly perform adversarial detection, we provide a group of default parameters, do you want to quickly start the program? y/n
y
./detect.sh: line 44: $'\n\nthrehold threhold\nextendScale extendScale\nrelaxScale relaxScale\nmutatedModelsPath mutatedModelsPath\nalpha alpha\nbeta beta\ntestSamplesPath testSamplesPath\ndataType dataType\ntestType testType\nseedModelPath seedModelPath\n\n': command not found
=======>Please Check Parameters<=======
threhold: 0.0441
extendScale: 1.0
relaxScale: 0.1
mutatedModelsPath: ../build-in-resource/mutated_models/mnist/lenet/nai/5e-2p/
alpha: 0.05
beta: 0.05
testSamplesPath: ../build-in-resource/dataset/mnist/adversarial/jsma/
dataType: 0
testType: adv
seedModelPath: ../build-in-resource/pretrained-model/lenet.pkl
mutatedModelsPath: ../build-in-resource/mutated_models/mnist/lenet/nai/5e-2p/
device: -1
=======>Parameters<=======
Press any key to start mutation process
CTRL+C break command bash...

Processed:100.00 %adverage accuracy:0.998, avgerage mutated used:35.538
root@ef79040f7919:/home/icse2019/source/scripts
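A side note on the nrLCR.npy detour above: a `.npy` file is a serialized NumPy array (the `cat` output shows its header: dtype `<f8`, shape `(1000,)`), so bash rightly refuses to execute it. The sketch below shows reading it properly; since the real file lives inside the container, a throwaway array stands in for it here:

```python
import os
import tempfile

import numpy as np

# Stand-in for the real ../build-in-resource/nr-lcr/.../nrLCR.npy file
lcr = np.linspace(0.0, 1.0, 1000)
path = os.path.join(tempfile.mkdtemp(), "nrLCR.npy")
np.save(path, lcr)            # same .npy format as the artifact's file

loaded = np.load(path)        # the correct way to read it
print(loaded.shape, loaded.dtype)   # (1000,) float64
```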

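The confidence numbers in the LCR summary (0.0105 / 0.0125 / 0.0138) are consistent with normal-approximation interval half-widths z · std / √n using the reported std 0.1693 and n = 1000. A quick check of that reading (the z constants are the standard two-sided critical values; this is my reconstruction, not the artifact's code):

```python
import math

# Half-width of a normal-approximation confidence interval: z * std / sqrt(n).
# std and n come straight from the transcript's summary line.
std, n = 0.1693, 1000
for level, z in [(95, 1.960), (98, 2.326), (99, 2.576)]:
    half_width = z * std / math.sqrt(n)
    print(f"confidence({level}%): {half_width:.4f}")
# prints 0.0105, 0.0125, 0.0138 -- matching the transcript
```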
At noon I just went over to read papers until I was completely burned out; once done, I did my own thing and ignored everything else.
The ridiculous part is that I couldn't keep going after only 5 papers.
Chatting with a senior student also gave me some idea of the current state of academia.
The simpler topics attract a lot of low-effort, padded work.
Keep at it.