In the resource-allocation problem for distributed systems on high-performance computers, we do not really know which device (a disk, a NIC, and so on) is more likely to be worn out, or which is currently off duty and will add a delay before the data are ready. The current solutions are random or round-robin scheduling to spread out wear, plus dynamic routing for the fastest path. We could use the data already being collected to make these decisions automatic.
A seasoned system administrator knows which parameters to tweak: the stride on the distributed file system, the MTU of the InfiniBand card, the route used to fetch the data. eBPF (extended Berkeley Packet Filter) can already record this kind of information, such as the I/O latency on each storage node and the network latency across the topology, as time-series data. We can use these data to predict which topology, stride, and other parameters are the best way to reach the data.
The data arrive online, so the prediction function can be an online reinforcement-learning agent. As in the k-armed bandit problem, the reward can be a function of the latency gained and of device wear, and the updates come from the real-time disk and network latencies. The information given to the RL agent can include where the data are located on disk, which data are requested most often (DBMS queries versus random small files), and how often each disk fails.
For benchmarks and evaluation, we can measure the statistical latency gain of the system and the overall disk wear after stress tests.
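The bandit framing above can be sketched in a few lines. This is a toy, not the proposed system: the device latencies, the ε value, and the class name are all invented for illustration.

```python
import random

class EpsilonGreedyScheduler:
    """Hypothetical sketch: treat each device (disk/NIC) as one arm of a
    k-armed bandit and use the negative observed latency as the reward,
    so the scheduler learns to prefer the currently fastest device while
    still exploring occasionally."""

    def __init__(self, n_devices, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_devices        # pulls per device
        self.values = [0.0] * n_devices      # running mean reward per device

    def select(self):
        # explore with probability epsilon, otherwise exploit the best mean
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))
        return max(range(len(self.counts)), key=lambda i: self.values[i])

    def update(self, device, latency_ms):
        reward = -latency_ms                 # lower latency = higher reward
        self.counts[device] += 1
        # incremental update of the running mean
        self.values[device] += (reward - self.values[device]) / self.counts[device]

# Toy simulation with made-up latencies: device 1 is the fastest,
# so after enough rounds it should receive most of the traffic.
random.seed(0)
true_latency = [5.0, 2.0, 9.0]               # ms, hypothetical
sched = EpsilonGreedyScheduler(n_devices=3)
for _ in range(2000):
    d = sched.select()
    sched.update(d, random.gauss(true_latency[d], 0.5))
print(sched.counts.index(max(sched.counts)))  # 1
```

In a real deployment the `update` call would be fed by the eBPF-collected latency series rather than a simulator, and the reward would also fold in a wear term.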
Last week, three weeks ago, and four weeks ago we had talks in our group: one was a visit from an NUS professor working on software analysis, another a project that uses DTMCs to interpret audio. The author of the latter, Xiaoning Du, is now at Sydney Tech; the other speaker, Hongxu Chen, did his master's at SJTU and his bachelor's and Ph.D. at NTU.
I would say that nobody at a great university with a decent research background turns out to be a median person. Someone who won Singapore's NRF Fellowship right after finishing his Ph.D. (roughly the equivalent of the QianRen program in China) is simply great. A little about his CV:
He graduated from IIT Bombay and still keeps close research ties with IIT Kanpur, both cities I have been to. I would say most people at the IITs are super intelligent, but for lack of money they tend to focus on theory and mathematical proofs. He is no exception, and neither was the guy I met at CRVF-2019.
What is his strength? I think he thinks very fast and goes straight to the essence. For the SAT-solver part, given only the pseudocode, he could quickly come up with useful test cases to probe its usability. I think a great researcher should be equipped with insight, and he obviously has it. For EEsolver, a new and fast solver for finding false-negative bugs in programs, he insisted on testing the uniformity of the benchmarks. A proof is not just benchmarks; to find the algorithm inside, we should…
I always keep an eye on what is going on in edge computing because I am a fan of IoT. Honestly, I started my CS major with IoT projects, dumb as they were (the one listed here is not my first dumb project, haha).
I have been mulling over this idea for a while; its core problems are very close to the key state-of-the-art problems:
1. Computing power: the data-processing equipment is no longer a rack server, so how do we ensure that performance still meets the requirements?
2. Power consumption: the power draw cannot grow to a level that ordinary residential power cannot support, and more power also means more heat.
3. Stability: deploying in the field makes on-site maintenance dramatically harder, so better stability also means lower maintenance cost. This includes harsh environments on the user side: high temperature, humidity, corrosive gases, and so on.
4. Cost: only when the cost covers the demand can we deploy widely and meet customer needs; if the cost is not competitive with a network-plus-data-center solution, the whole thing is meaningless.
Moore's law has hit a bottleneck, and it is increasingly difficult to get the best of both general-purpose and workload-specific optimization. A hardware coprocessor that integrates the common AI algorithms directly into the edge device therefore becomes the key to high performance at low power. A key power threshold is a 6 W TDP: below roughly 6 W, a chip can be cooled with a bare heat sink and no fan. Going fanless not only reduces noise, it also removes fan failure as a reliability and maintenance risk. Among front-end edge-computing chips, Horizon, with its self-developed BPU architecture, has found a new balance among these requirements: its 4 TOPS of equivalent compute matches a top GPU from two years earlier, while its typical power consumption is only 2 W. That means not only no fan, but also that the whole device can be sealed in a metal case, avoiding the dust and corrosion that extra ventilation holes invite.
When it comes to computing power, there is a big misunderstanding in the industry: peak compute is often taken as the main index of an AI chip. What we really need is the effective compute and the algorithmic performance it delivers. This should be measured along four dimensions: peak compute per watt and peak compute per dollar (determined by chip architecture, front- and back-end design, and process node), the effective utilization of that peak (determined by the algorithm and the chip architecture together), and the ratio of effective compute to delivered AI performance, mainly speed and accuracy (determined by the algorithm). ResNet used to be the industry workhorse, but today a smaller, more carefully designed model such as MobileNet reaches the same accuracy and speed with one tenth of the compute. These cleverly designed models, however, pose a huge challenge to the compute architecture: on a traditionally designed accelerator they can sharply reduce effective utilization, and from the standpoint of final AI performance the switch can even be a net loss. Horizon's biggest strength is predicting where the key algorithms in important application scenarios are heading and baking those computational characteristics into the architecture in advance, so that the processor still fits the mainstream algorithms after the one or two years that chip development takes. Compared with other typical AI processors, Horizon's processors have therefore kept a consistently high effective utilization as algorithms evolve, genuinely cashing in on the advantages of algorithmic innovation. Horizon also optimizes the compiler's instruction scheduling; after optimization, effective peak utilization increases by 85%.
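The peak-versus-effective distinction is easy to make concrete with arithmetic. The numbers below (16 TOPS, 4 TOPS, and the utilization figures) are invented for illustration, not Horizon's published data:

```python
def effective_tops(peak_tops, utilization):
    """Effective compute = peak compute times the fraction of it the
    real workload can actually keep busy. Purely illustrative model."""
    return peak_tops * utilization

# Hypothetical chip A: impressive peak, but a compact model such as
# MobileNet only keeps 15% of it busy on a traditional architecture.
chip_a = effective_tops(16.0, 0.15)
# Hypothetical chip B: a quarter of the peak, co-designed with the
# algorithm so utilization stays high.
chip_b = effective_tops(4.0, 0.80)
print(chip_a < chip_b)  # True: the "weaker" chip delivers more useful compute
```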
This makes the chip 2.5 times faster, or lets it do the same amount of work at 40% of the power. Another feature of the Horizon BPU is how well it integrates with the sensors in the field. Video demands huge bandwidth: 1080p @ 30 fps is about 1.5 Gbit/s from camera to chip. The Horizon BPU can ingest the video and perform on-device target detection, tracking, and recognition at the same time, so all the necessary work is completed on site. Both the Journey series for intelligent driving and the Sunrise series for the intelligent Internet of things can easily handle the bandwidth and processing such scenes demand. More importantly, a typical AI computation completes within 30 ms, which gradually makes latency-critical applications realistic: autonomous driving, and recognition of lane lines, pedestrians, vehicles, and obstacles, where an excessive or unpredictable delay causes accidents.
With the Sunrise BPU, AI computation finishes within a predictable latency, which makes developing autonomous driving much more convenient. Ever since it was proposed, edge computing has been held back by limited compute performance and strict sensor and power constraints, so it has developed slowly. By striking a new balance between functionality and performance, the Horizon BPU chips can help edge-computing applications deploy to the field far more easily, letting all kinds of IoT applications serve everyone more effectively.
When comparing learners, if one learner's ROC curve is completely "enclosed" by another's, we can assert that the latter performs better; if the two ROC curves cross, it is hard to say in general which one is better. If a comparison must still be made, a reasonable criterion is the area under the ROC curve, the AUC (Area Under ROC Curve), as shown in Figures 1 and 2.
At this point you may wonder: given the definition and the way AUC is computed, how is it connected to the probability that a predicted positive ranks ahead of a negative? Understanding this straight from the definition is hard; in fact, AUC is closely related to the Mann-Whitney U test. From the viewpoint of the Mann-Whitney U statistic, AUC is the probability that, if you randomly draw one sample from all the positives and one from all the negatives and score both with your learner, the positive sample receives a higher predicted-positive probability than the negative sample does. So AUC reflects the classifier's ability to rank samples. Under this interpretation, if we classify samples completely at random, the AUC should be close to 0.5.
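The rank interpretation above is easy to verify numerically. The sketch below (pure Python; the function name is mine) computes AUC directly as the probability that a random positive outscores a random negative, and checks that a random scorer lands near 0.5:

```python
import random

def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as the Mann-Whitney statistic: the probability that a randomly
    drawn positive gets a higher score than a randomly drawn negative
    (ties count as 1/2)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# A classifier that separates the classes perfectly has AUC = 1.
print(auc_mann_whitney([0.9, 0.8], [0.1, 0.2]))  # 1.0

# A completely random scorer should land near 0.5.
random.seed(0)
pos = [random.random() for _ in range(500)]
neg = [random.random() for _ in range(500)]
print(round(auc_mann_whitney(pos, neg), 2))
```

Production code would use the O(n log n) rank-sum formulation instead of this O(n²) double loop, but the double loop is the definition itself.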
First, prediction: the CPU needs to predict future allocation from each process's past resource usage, to reduce the unfairness between threads and the security risks that bad dispatching brings. Also, some threads (there are more of them at the OS level) or programs move from a safe state to an unsafe one, or switch states, partway through execution, which affects the scheduling result, so the CPU scheduler also predicts the branches. Industry today mainly uses heuristic predictors, which perform much better than the FCFS, SJF, SRTF, and the venerable RR we learned in the OS course.
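To see why knowing (or predicting) job lengths pays off over plain FCFS, here is the classic textbook comparison; the burst values are the standard illustration with all jobs arriving at time 0, not something from the article:

```python
def avg_wait_fcfs(bursts):
    """Average waiting time under First-Come-First-Served,
    jobs served in the given arrival order."""
    wait, elapsed = 0, 0
    for b in bursts:
        wait += elapsed     # this job waited for everything before it
        elapsed += b
    return wait / len(bursts)

def avg_wait_sjf(bursts):
    """Shortest-Job-First is just FCFS on the sorted burst list
    (assuming all jobs arrive at time 0)."""
    return avg_wait_fcfs(sorted(bursts))

bursts = [24, 3, 3]            # one long job arrives first
print(avg_wait_fcfs(bursts))   # 17.0
print(avg_wait_sjf(bursts))    # 3.0
```

A heuristic scheduler that merely predicts which jobs are short recovers most of this gap, which is why the prediction step matters so much.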
The article also describes a naïve version of the Spectre vulnerability, called a speculative-execution attack. Suppose the cache points to a location on the drive (the cache here does not necessarily hold data; it may map to another address space). The attack then sets up a new cache region as a buffer, which lands in the cache after passing through the LLC. After one pass of reads, timing alone reveals where the vulnerability may be.
I faced a linking problem: TensorFlow links against CUDA 10.0, but what is installed is neither 10.0 nor plain 10 (it is 10.1), so we have to patch the links ourselves. Also, in 10.1 cublas is no longer in the lib64 directory but in the x86_64-linux-gnu directory.
2019-08-12 06:57:41.705482: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-12 06:57:41.707082: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: GeForce RTX 2070 with Max-Q Design major: 7 minor: 5 memoryClockRate(GHz): 1.185
pciBusID: 0000:01:00.0
2019-08-12 06:57:41.707394: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcudart.so.10.0'; dlerror: libcudart.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64
2019-08-12 06:57:41.707637: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcublas.so.10.0'; dlerror: libcublas.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64
2019-08-12 06:57:41.707908: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcufft.so.10.0'; dlerror: libcufft.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64
2019-08-12 06:57:41.708132: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcurand.so.10.0'; dlerror: libcurand.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64
2019-08-12 06:57:41.708355: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcusolver.so.10.0'; dlerror: libcusolver.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64
2019-08-12 06:57:41.708580: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcusparse.so.10.0'; dlerror: libcusparse.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64
2019-08-12 06:57:41.708639: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2019-08-12 06:57:41.708665: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1641] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices…
2019-08-12 06:57:41.709682: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-12 06:57:41.709692: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]
cd /usr/local/cuda/lib64 && for f in lib*.so.10; do sudo ln "$f" "${f}.0"; done
cd /usr/lib/x86_64-linux-gnu && sudo ln libcublas.so.10 /usr/local/cuda/lib64/libcublas.so.10.0
Then you can restart your Python console and try tensorflow-gpu again:
python -c 'import tensorflow as tf; print("GPU Available: ", tf.test.is_gpu_available())'
PS C:\Users\AERO> docker attach mynginx^C
PS C:\Users\AERO> docker attach dgl2019/icse2019-artifacts
Error: No such container: dgl2019/icse2019-artifacts
PS C:\Users\AERO> docker attach docker.io/dgl2019/icse2019-artifacts
Error: No such container: docker.io/dgl2019/icse2019-artifacts
PS C:\Users\AERO> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
PS C:\Users\AERO> docker pull dgl2019/icse2019-artifacts
Using default tag: latest
latest: Pulling from dgl2019/icse2019-artifacts
Digest: sha256:ddf6ceb380481b67485b18728f302958113569ea8571b9fcd78439724eeaaef8
Status: Image is up to date for dgl2019/icse2019-artifacts:latest docker.io/dgl2019/icse2019-artifacts:latest
PS C:\Users\AERO> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
PS C:\Users\AERO> docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
dgl2019/icse2019-artifacts latest a5a18674d9a4 6 months ago 6.43GB
PS C:\Users\AERO> docker exec -it a5a18674d9a4 /bin/bash
Error: No such container: a5a18674d9a4
PS C:\Users\AERO> docker exec -it dgl2019/icse2019-artifacts /bin/bash
Error: No such container: dgl2019/icse2019-artifacts
PS C:\Users\AERO> docker exec -it dgl2019 /bin/bash
Error: No such container: dgl2019
PS C:\Users\AERO> docker run -it dgl2019/icse2019-artifacts /bin/bash
root@ef79040f7919:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@ef79040f7919:/# cd home/
root@ef79040f7919:/home# ls
icse2019
root@ef79040f7919:/home# cd icse2019/
root@ef79040f7919:/home/icse2019# ls
source
root@ef79040f7919:/home/icse2019# cd source/
root@ef79040f7919:/home/icse2019/source# ls
__init__.py attacks build-in-resource config detect lcr_auc model_mutation models scripts utils
root@ef79040f7919:/home/icse2019/source# cd config/
root@ef79040f7919:/home/icse2019/source/config# ls
logging.yaml
root@ef79040f7919:/home/icse2019/source/config# cd ..
root@ef79040f7919:/home/icse2019/source# cd model
bash: cd: model: No such file or directory
root@ef79040f7919:/home/icse2019/source# ls
__init__.py attacks build-in-resource config detect lcr_auc model_mutation models scripts utils
root@ef79040f7919:/home/icse2019/source# cd models
root@ef79040f7919:/home/icse2019/source/models# ls
__init__.py __init__.pyc ensemble_model.py ensemble_model.pyc googlenet.py googlenet.pyc lenet.py lenet.pyc
root@ef79040f7919:/home/icse2019/source/models# python lenet.py
root@ef79040f7919:/home/icse2019/source/models# uname -r
4.9.184-linuxkit
root@ef79040f7919:/home/icse2019/source/models# uname -a
Linux ef79040f7919 4.9.184-linuxkit #1 SMP Tue Jul 2 22:58:16 UTC 2019 x86_64 GNU/Linux
root@ef79040f7919:/home/icse2019/source/models# ls
__init__.py  __init__.pyc  ensemble_model.py  ensemble_model.pyc  googlenet.py  googlenet.pyc  lenet.py  lenet.pyc
root@ef79040f7919:/home/icse2019/source/models# cd ..
root@ef79040f7919:/home/icse2019/source# ls
__init__.py attacks build-in-resource config detect lcr_auc model_mutation models scripts utils
root@ef79040f7919:/home/icse2019/source# cd utils/
root@ef79040f7919:/home/icse2019/source/utils# ls
__init__.py data_manger.pyc logging_util.pyc model_trainer.py pytorch_extend.pyc
__init__.pyc imgnet12-valprep.sh model_manager.py model_trainer.pyc time_util.py
data_manger.py logging_util.py model_manager.pyc pytorch_extend.py time_util.pyc
root@ef79040f7919:/home/icse2019/source/utils# ls
__init__.py data_manger.pyc logging_util.pyc model_trainer.py pytorch_extend.pyc
__init__.pyc imgnet12-valprep.sh model_manager.py model_trainer.pyc time_util.py
data_manger.py logging_util.py model_manager.pyc pytorch_extend.py time_util.pyc
root@ef79040f7919:/home/icse2019/source/utils# cd ..
root@ef79040f7919:/home/icse2019/source# cd scripts/
root@ef79040f7919:/home/icse2019/source/scripts# ./craftAdvSamples.sh
NOTE: Our experiments are only based on two datasets: mnist and cifar10,
but it is a piece of cake to extend to other datasets only providing a proper pytorch-style data loader tailored to himself datasets.
Each attack manner has different parameters. All the parameters are organized in a list. The order of the parameters can be found in the REDME in this folder.
To quickly yield adversarial samples, we provide a default setting for each attack manner. Do you want to perform an attack with the default settings? y/n
y
dataType ( [0] mnist; [1] cifar10):1
attackType:fgsm
=======>Please Check Parameters<=======
modelName: googlenet
modelPath: ../build-in-resource/pretrained-model/googlenet.pkl
dataType: 1
sourceDataPath: ../build-in-resource/dataset/cifar10/raw
attackType: fgsm
attackParameters: 0.03,true
savePath: ../artifacts_eval/adv_samples/cifar10/fgsm
device: -1
<======>Parameters=======>
Press any key to start attack process
CTRL+C break command bash...
Crafting Adversarial Samples....
targeted model: Average loss: -11.4510, Accuracy: 9049/10000 (90.49%)
./craftAdvSamples.sh: line 129: 34 Killed python -u $exe_file --modelName ${modelName} --modelPath ${modelPath} --dataType ${dataType} --sourceDataPath ${sourceDataPath} --attackType ${attackType} --attackParameters ${attackParameters} --savePath ${savePath} --device ${device}
DONE!
root@ef79040f7919:/home/icse2019/source/scripts# nvidia-smi
bash: nvidia-smi: command not found
root@ef79040f7919:/home/icse2019/source/scripts# ks
bash: ks: command not found
root@ef79040f7919:/home/icse2019/source/scripts# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                2
On-line CPU(s) list:   0,1
Thread(s) per core:    2
Core(s) per socket:    1
Socket(s):             1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 158
Model name:            Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
Stepping:              10
CPU MHz:               2362.464
BogoMIPS:              4724.92
Hypervisor vendor:     Microsoft
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              12288K
root@ef79040f7919:/home/icse2019/source/scripts# lspci
bash: lspci: command not found
root@ef79040f7919:/home/icse2019/source/scripts# dmseg
bash: dmseg: command not found
root@ef79040f7919:/home/icse2019/source/scripts# dmesg
dmesg: read kernel buffer failed: Operation not permitted
root@ef79040f7919:/home/icse2019/source/scripts# sudo dmesg
bash: sudo: command not found
root@ef79040f7919:/home/icse2019/source/scripts# ^Cdo dmesg
root@ef79040f7919:/home/icse2019/source/scripts# ./advSampelsVerify.sh 0 ../artifacts_eval/adv_samples/mnist/fgsm/2019-01-13_03:48:45 -1 1
Traceback (most recent call last):
  File "../attacks/attack_util.py", line 369, in <module>
    test_adv_samples()
  File "../attacks/attack_util.py", line 318, in test_adv_samples
    ]), show_file_name=True, img_mode=img_mode, max_size=10000)
  File "../utils/data_manger.py", line 240, in __init__
    all_files = np.array([img_file for img_file in os.listdir(root)])
OSError: [Errno 2] No such file or directory: '../artifacts_eval/adv_samples/mnist/fgsm/2019-01-13_03:48:45'
root@ef79040f7919:/home/icse2019/source/scripts# ls
advSampelsVerify.sh  craftAdvSamples.sh  default_cifar10_auc_analysis.sh  default_mnist_auc_analysis.sh  detect.sh  lcr_acu_analysis.sh  modelMuated.sh
root@ef79040f7919:/home/icse2019/source/scripts# ./advSampelsVerify.sh 0 ../a
artifacts_eval/ attacks/
root@ef79040f7919:/home/icse2019/source/scripts# ./advSampelsVerify.sh 0 ../artifacts_eval/adv_samples/cifar10/fgsm/2019-08-07_00\:04\:11/ -1 1
Total:0,Success:0
root@ef79040f7919:/home/icse2019/source/scripts# ./craftAdvSamples.sh
NOTE: Our experiments are only based on two datasets: mnist and cifar10,
but it is a piece of cake to extend to other datasets only providing a proper pytorch-style data loader tailored to himself datasets.
Each attack manner has different parameters. All the parameters are organized in a list. The order of the parameters can be found in the REDME in this folder.
To quickly yield adversarial samples, we provide a default setting for each attack manner. Do you want to perform
an attack with the default settings?y/n
y
dataType ( [0] mnist; [1] cifar10):0
attackType:fgsm
=======>Please Check Parameters<=======
modelName: lenet
modelPath: ../build-in-resource/pretrained-model/lenet.pkl
dataType: 0
sourceDataPath: ../build-in-resource/dataset/mnist/raw
attackType: fgsm
attackParameters: 0.35,true
savePath: ../artifacts_eval/adv_samples/mnist/fgsm
device: -1
<======>Parameters=======>
Press any key to start attack process
CTRL+C break command bash...
Crafting Adversarial Samples....
targeted model: Average loss: -12.9422, Accuracy: 9829/10000 (98.29%)
y
Eps=0.35: Average loss: -7.5184, Accuracy: 7775/9829 (79.10%)
successful samples 2054
Done!
icse19-eval-attack-fgsm: rename 125, remove 40,success 1889
Adversarial samples are saved in ../artifacts_eval/adv_samples/mnist/fgsm/2019-08-07_00:58:21
DONE!
root@ef79040f7919:/home/icse2019/source/scripts# /modelMuated.sh
bash: /modelMuated.sh: No such file or directory
root@ef79040f7919:/home/icse2019/source/scripts# 。/modelMuated.sh
bash: 。/modelMuated.sh: No such file or directory
root@ef79040f7919:/home/icse2019/source/scripts# ./modelMuated.sh
NOTE: Our experiments are only based on two datasets: mnist and cifar10,
but it is a piece of cake to extend to other datasets only providing a
proper pytorch-style data loader tailored to himself datasets.
To quickly verify the mutation process, we provide a group of default parameters,do you want to quickly start the
program?y/n
y
=======>Parameters<=======
modelName: lenet
modelPath: ../build-in-resource/pretrained-model/lenet.pkl
accRation: 0.9
dataType: 0
numMModels: 10
mutatedRation: 0.001
opType: GF
savePath: ../artifacts_eval/modelMuation/
device: -1
<======>Parameters=======>
Press any key to start mutation process
CTRL+C break command bash...
2019-08-07 00:59:08,632 - INFO - data type:mnist
2019-08-07 00:59:08,637 - INFO - >>>>>>>>>>>>Start-new-experiment>>>>>>>>>>>>>>>>
2019-08-07 00:59:10,078 - INFO - orginal model acc=0.9829
2019-08-07 00:59:10,079 - INFO - acc_threshold:88.0%
2019-08-07 00:59:10,079 - INFO - seed_md_name:lenet,op_type:GF,ration:0.001,acc_tolerant:0.9,num_mutated:10
2019-08-07 00:59:10,091 - INFO - 61/61706 weights to be fuzzed
2019-08-07 00:59:12,375 - INFO - Mutated model: accurate 0.9818
2019-08-07 00:59:12,379 - INFO - Progress:1/10
2019-08-07 00:59:12,388 - INFO - 61/61706 weights to be fuzzed
2019-08-07 00:59:13,678 - INFO - Mutated model: accurate 0.9832
2019-08-07 00:59:13,682 - INFO - Progress:2/10
2019-08-07 00:59:13,691 - INFO - 61/61706 weights to be fuzzed
2019-08-07 00:59:14,958 - INFO - Mutated model: accurate 0.9823
2019-08-07 00:59:14,960 - INFO - Progress:3/10
2019-08-07 00:59:14,971 - INFO - 61/61706 weights to be fuzzed
2019-08-07 00:59:16,271 - INFO - Mutated model: accurate 0.9827
2019-08-07 00:59:16,274 - INFO - Progress:4/10
2019-08-07 00:59:16,283 - INFO - 61/61706 weights to be fuzzed
2019-08-07 00:59:17,582 - INFO - Mutated model: accurate 0.9829
2019-08-07 00:59:17,586 - INFO - Progress:5/10
2019-08-07 00:59:17,595 - INFO - 61/61706 weights to be fuzzed
2019-08-07 00:59:18,764 - INFO - Mutated model: accurate 0.9829
2019-08-07 00:59:18,767 - INFO - Progress:6/10
2019-08-07 00:59:18,777 - INFO - 61/61706 weights to be fuzzed
2019-08-07 00:59:19,921 - INFO - Mutated model: accurate 0.982
2019-08-07 00:59:19,923 - INFO - Progress:7/10
2019-08-07 00:59:19,932 - INFO - 61/61706 weights to be fuzzed
2019-08-07 00:59:21,072 - INFO - Mutated model: accurate 0.9823
2019-08-07 00:59:21,075 - INFO - Progress:8/10
2019-08-07 00:59:21,086 - INFO - 61/61706 weights to be fuzzed
2019-08-07 00:59:22,262 - INFO - Mutated model: accurate 0.983
2019-08-07 00:59:22,264 - INFO - Progress:9/10
2019-08-07 00:59:22,274 - INFO - 61/61706 weights to be fuzzed
2019-08-07 00:59:23,442 - INFO - Mutated model: accurate 0.9824
2019-08-07 00:59:23,444 - INFO - Progress:10/10
The mutated models are stored in ../artifacts_eval/modelMuation/2019-08-07_00:59:08/gf0.001/lenet
root@ef79040f7919:/home/icse2019/source/scripts# ../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy^C
root@ef79040f7919:/home/icse2019/source/scripts# ../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy
bash: ../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy: Permission denied
root@ef79040f7919:/home/icse2019/source/scripts# sudo ../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy
bash: sudo: command not found
root@ef79040f7919:/home/icse2019/source/scripts# chmod 777 ../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy
root@ef79040f7919:/home/icse2019/source/scripts# ../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy
bash: ../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy: cannot execute binary file: Exec format error
root@ef79040f7919:/home/icse2019/source/scripts# cat ../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy
(binary output of the .npy file omitted; the header reads: NUMPY v1 {'descr': '<f8', 'fortran_order': False, 'shape': (1000,)})
root@ef79040f7919:/home/icse2019/source/scripts# ./lcr_acu_analysis.sh
NOTE: Our experiments are only based on two datasets: mnist and cifar10,
but it is a piece of cake to extend to other datasets only providing a proper pytorch-style data loader tailored to himself datasets.
To quickly start the label change rate and auc statistics, we provide a group of default parameters, do you want to start the program? y/n
y
Please provide the lcr result of normal samples for computing. Do you have the results of normal samples? (y/n) y
path of normal's list: ../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy
=======>Please Check Parameters<=======
dataType: mnist
device: -1
testType: adv
useTrainData: False
batchModelSize: 2
maxModelsUsed: 10
mutatedModelsPath: ../build-in-resource/mutated_models/mnist/lenet/gf/5e-2p/
testSamplesPath: ../build-in-resource/dataset/mnist/adversarial/jsma/
seedModelName: lenet
test_result_folder: ../lcr_auc-testing-results/mnist/lenet/gf/5e-2p/jsma/
The test will be divided into 5 batches
The logs will be saved in:
../lcr_auc-testing-results/mnist/lenet/gf/5e-2p/jsma/-2019-08-07-01
is_adv: True
nrLcrPath: ../build-in-resource/nr-lcr/mnsit/lenet/gf/5e-2p/nrLCR.npy
<======>Parameters=======>
Press any key to start mutation process
CTRL+C break command bash...
batch:1
model_start_no:1
batch:2
model_start_no:3
batch:3
model_start_no:5
batch:4
model_start_no:7
batch:5
model_start_no:9
Testing Done!
>>>>>>>>>>>seed data:mnist,mutated_models:../build-in-resource/mutated_models/mnist/lenet/gf/5e-2p/<<<<<<<<<<
>>>>>>>>>>>mnist<<<<<<<<<<<<<<
Total Samples Used:1000,auc:0.9871,avg_lcr:0.5032,std:0.1693,confidence(95%):0.0105,confidence(98%):0.0125,confidence(99%):0.0138
root@ef79040f7919:/home/icse2019/source/scripts# ./detect.sh
NOTE: Our experiments are only based on two datasets: mnist and cifar10,
but it is a piece of cake to extend to other datasets only providing a proper pytorch-style data loader tailored to himself datasets.
To quickly perform adversarial detection, we provide a group of default parameters, do you want to quickly start the program? y/n
y
./detect.sh: line 44: $'\n\nthrehold threhold\nextendScale extendScale\nrelaxScale relaxScale\nmutatedModelsPath mutatedModelsPath\nalpha alpha\nbeta beta\ntestSamplesPath testSamplesPath\ndataType dataType\ntestType testType\nseedModelPath seedModelPath\n\n': command not found
=======>Please Check Parameters<=======
threhold: 0.0441
extendScale: 1.0
relaxScale: 0.1
mutatedModelsPath: ../build-in-resource/mutated_models/mnist/lenet/nai/5e-2p/
alpha: 0.05
beta: 0.05
testSamplesPath: ../build-in-resource/dataset/mnist/adversarial/jsma/
dataType: 0
testType: adv
seedModelPath: ../build-in-resource/pretrained-model/lenet.pkl
mutatedModelsPath: ../build-in-resource/mutated_models/mnist/lenet/nai/5e-2p/
device: -1
<======>Parameters=======>
Press any key to start mutation process
CTRL+C break command bash...
Processed:100.00 %adverage accuracy:0.998, avgerage mutated used:35.538
root@ef79040f7919:/home/icse2019/source/scripts