ASPLOS'23 attendance

I submitted a workshop paper this time, but visa trouble got in the way (I mailed my passport off for another visa midway through and it couldn't make it back in time), so once again it's an online conference for me. Honestly I only care about CXL and codesign; I could have met 刘神, Jovan, and Yan in person, but I'll save that for next time and write more code and papers instead. My ticket to Seattle on the 25th is non-refundable, and I've now redirected the passport to Seattle, so don't be surprised if you see me there on the last day.

Xiangshan Tutorial

The spoken English was rough; the talk was worse than the material already on Bilibili.

FireSim

Mostly an introduction to their FireSim; I asked them when they would update the F1 VU9P support.

LATTE

Performance vs. Correctness When Writing Low-Level HPC Code

Exploring Performance of Cache-Aware Tiling Strategies in MLIR Infrastructure

Intel's oneDNN approach on MLIR.

PyAIE: A Python-based Programming Framework for Versal ACAP AI Engines

Versal ACAP HLS

A Scalable Formal Approach for Correctness-Assured Hardware Design

Jin Yang's talk; he already presented this at AHA.

Designing a Dataflow Hardware Accelerator with an Abstract Machine

ASTRA-sim: Enabling SW/HW Co-Design Exploration for Distributed Deep Learning Training Platforms

YArch

Accelerating Sparse Tensor Algebra by Overbooking Buffer Occupancy

Detecting Microarchitectural Vulnerabilities via Fuzz Testing of White-box CPUs

Uses fuzzing to find Store Bypass issues.

ConstSpec: Mitigating Cache-based Spectre Attacks via Fine-Grain Constant-Time Accesses

SMAD: Efficiently Defending Against Transient Execution Attacks

This one is by a student of the mentor I was assigned; that mentor is well known for GPU side-channel work.

FireSim and Chipyard User/Developer Workshop

Integrating a high performance instruction set simulator with FireSim to cosimulate operating system boots, by Tenstorrent

Mainly about how to do agile development on top of FireSim.

Session 1B: Shared Memory/Mem Consistency

The session chair was Nadav Amit, that guy at VMware who is best at mixing and matching Intel extensions.

Cohort: Software-Oriented Acceleration for Heterogeneous SoCs

This paper builds its own L1/L2 cache plus a crypto accelerator on an FPGA and shows how to stitch them together; with CXL.cache this would no longer be a problem.

Probabilistic Concurrency Testing for Weak Memory Programs

A PCT framework that uses the SC specification as the assertion oracle to find bugs.






Hits bugs faster.



The heuristic for $h$ is good enough for data-structure tests. The assertion tests look great; when I was at ShanghaiTech, people were using the same tool on PM.











MC Mutants: Evaluating and Improving Testing for Memory Consistency Specifications






Transforms disallowed memory behaviors into weak memory labels.



A binary translator.



Protect the System Call, Protect (Most of) the World with BASTION

Session 2A: Compiler Techniques & Optimization

SPLENDID: Supporting Parallel LLVM-IR Enhanced Natural Decompilation for Interactive Development

Beyond Static Parallel Loops: Supporting Dynamic Task Parallelism on Manycore Architectures with Software-Managed Scratchpad Memories

Graphene: An IR for Optimized Tensor Computations on GPUs

Coyote: A Compiler for Vectorizing Encrypted Arithmetic Circuits

NNSmith: Generating Diverse and Valid Test Cases for Deep Learning Compilers

刘神's paper.

Session 5B (Storage)

Session 7A (Deep Learning Systems)

About the MSHRs for LLC misses with CXL.mem devices

In [1], the authors describe an Asynchronous Memory Unit (AMU) that requires co-design support from both the CPU and the memory controller.

The overhead of hardware consistency checking is one reason that limits the capacity of traditional load/store queues and MSHRs. The AMU leaves the consistency issue to software. They argue that software/hardware cooperation is the right way for the AMU to exploit memory parallelism across long latencies.

As shown in the sensitivity-test figure in [2], the latency decomposition of DirectCXL looks completely different: there is no software or data-copy overhead. As the payload grows, the main component of DirectCXL latency becomes the LLC (CPU cache). This is because the Miss Status Holding Registers (MSHRs) in the CPU LLC can only track 16 concurrent misses, so with a large payload many 64 B memory requests are stalled on the CPU; for a 4 KB payload this accounts for 67% of the total latency.
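
A rough sanity check of that breakdown (my arithmetic, not the paper's model): a 4 KB payload issued at 64 B cacheline granularity needs

$$\frac{4096\ \mathrm{B}}{64\ \mathrm{B}} = 64 \text{ concurrent misses} \gg 16 \text{ MSHR entries},$$

so only a quarter of the requests can be in flight at any time and the rest queue up behind the LLC MSHRs, which is consistent with the LLC term dominating at large payloads.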

The conclusion is that the MSHRs inside the CPU are not enough to handle the memory load in the CXL.mem world, where both latency and bandwidth vary widely across the serial PCIe 5.0 lanes. Also, compared with the RDMA SRQ approach in the controller, we think the PMU and the coherency semantics still matter, and the future path for persistency, following Huawei's approach and the SRQ approach, will fall back to ld/st but with a smarter memory controller that performs the ld/st asynchronously.

Reference

  1. Asynchronous memory access unit for general purpose processors
  2. Direct Access, High-Performance Memory Disaggregation with DirectCXL

QEMU CXL Type 1 emulation proposal

Introduction to CXL Type 1

Guided Use Case

[1] and [2] are essentially QEMU's implementation of dm-crypt for LUKS: every device mapper over a physical block device requires a key and a crypto accelerator (or a software crypto implementation) to decrypt the data. We implement CXL Type 1 semantics for a crypto accelerator on top of the virtio-crypto-pci framework. We want to emulate a malfunctioning state or an unplug of the crypto device; the kernel will get the ATS-bit DMAed data and resume with the CPU's software crypto implementation.

Device emulation

DMA and access memory

Create a CacheMemRegion that maps a specific SPP region one-to-one onto a set of CXL.cache cachelines on the CXL device.

Crypto operations

When the kernel calls a crypto operation, we actually offload the encrypt/decrypt operation to the Type 1 accelerator through CXL.io, which tells the device to act on an arbitrary SPP. The accelerator first takes ownership of that SPP in the CacheMemRegion and notifies the host; eventually the host ends up with the Shared state for the SPP's cachelines.
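
A minimal sketch of that flow; every identifier below is hypothetical (stubbed so it compiles) and not the QEMU or virtio-crypto API:

#include <stdint.h>

enum mesi { MESI_M, MESI_E, MESI_S, MESI_I };

struct spp_line { uint8_t data[64]; enum mesi state; };

/* Host-visible view of the CacheMemRegion covering one SPP region. */
static struct spp_line cache_mem_region[64];

/* Stub for the CXL.io mailbox write that names the SPP line to operate on. */
static void cxl_io_submit(unsigned line, int encrypt) { (void)line; (void)encrypt; }

/* Emulated device side: take ownership, run the crypto op, hand back Shared. */
static void device_handle(unsigned line) {
    cache_mem_region[line].state = MESI_I;   /* host copy invalid while the device owns it */
    /* ... the accelerator encrypts/decrypts the 64-byte line here ... */
    cache_mem_region[line].state = MESI_S;   /* host re-reads: both sides end up Shared */
}

/* Guest-kernel side: offload one request instead of taking the software path. */
static void offload_crypto(unsigned line, int encrypt) {
    cxl_io_submit(line, encrypt);
    device_handle(line);                     /* in the real emulation this runs asynchronously */
}

int main(void) { offload_crypto(0, 1); return cache_mem_region[0].state == MESI_S ? 0 : 1; }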

Cache coherency emulation

struct D2HDataReq {
    uint32_t header : 24;     /* D2H data header                                    */
    uint32_t opcode : 4;      /* D2H request opcode                                 */
    uint8_t  credit;          /* CXL.cache channel crediting (width not fixed here) */
};
struct CXLCache {
    uint8_t  data[64];        /* the 64-byte cacheline                              */
    uint8_t  mesi : 4;        /* MESI state (4 bits)                                */
    uint64_t ats;             /* ATS bits (64 bits)                                 */
    struct D2HDataReq d2h[];  /* pending D2H requests in the remaining bits         */
};

The metadata works with Intel SPP write-protection support. Accesses to an arbitrary cacheline are marked into the SPP. We perform all the transaction descriptions and queuing within the 64 bytes of residual data in the SPP. An operation popped from the queue takes effect as a MESI-bit change and an update of the write protection for the sub-page and the root complex, or other side effects such as switch state changes.

Requests from the host and the device are not scheduled in a single FIFO; making the data visible to the host has higher priority, so H2D requests are consumed first and then the D2H queue is drained FIFO. All operations go through the interface operations on CXLCache.
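
A small sketch of that ordering rule, with hypothetical names; the only point is that the H2D queue is drained before the D2H queue:

#include <stdbool.h>
#include <stddef.h>

struct req { int opcode; unsigned line; };
struct req_queue { struct req slots[32]; size_t head, tail; };

static bool pop(struct req_queue *q, struct req *out) {
    if (q->head == q->tail) return false;
    *out = q->slots[(q->head++) % 32];
    return true;
}

static void apply_to_cxlcache(const struct req *r) { (void)r; /* MESI update, SPP bits, ... */ }

static void service(struct req_queue *h2d, struct req_queue *d2h) {
    struct req r;
    while (pop(h2d, &r)) apply_to_cxlcache(&r);   /* host visibility first: drain H2D */
    while (pop(d2h, &r)) apply_to_cxlcache(&r);   /* then device requests, in FIFO order */
}

int main(void) { struct req_queue h2d = {0}, d2h = {0}; service(&h2d, &d2h); return 0; }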

Taking exclusiveness-able

We mark the transported ATS bit to indicate whether the line can be taken exclusive, and copy the cacheline into another map; once an unplug is emulated, the cacheline is copied back so the kernel can resume the software crypto calculation.
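
A tiny copy-aside sketch under the same caveat (names are mine, not QEMU's):

#include <stdint.h>
#include <string.h>

struct line { uint8_t data[64]; uint64_t ats; };

static struct line live[64], shadow[64];

static void take_exclusive(unsigned i) {
    live[i].ats |= 1;                                     /* mark the line exclusiveness-able */
    memcpy(&shadow[i], &live[i], sizeof live[i]);         /* copy aside before handing it over */
}

static void emulate_unplug(void) {
    for (unsigned i = 0; i < 64; i++)
        if (live[i].ats & 1)
            memcpy(&live[i], &shadow[i], sizeof live[i]); /* restore for the CPU software path */
}

int main(void) { take_exclusive(3); emulate_unplug(); return 0; }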

How to emulate the eviction

We have two proposals

  1. Use QEMU with PEBS to watch cache evictions of the physical addresses of an SPP.
  2. Use sub-page pinning (page_get_fast) to pin a physical address within the last-level cache [7].

Reference

  1. https://www.os.ecc.u-tokyo.ac.jp/papers/2021-cloud-ozawa.pdf
  2. https://people.redhat.com/berrange/kvm-forum-2016/kvm-forum-2016-security.pdf
  3. https://yhbt.net/lore/all/[email protected]/T/
  4. https://privatewiki.opnfv.org/_media/dpacc/a_new_framework_of_cryptography_virtio_driver.pdf
  5. https://github.com/youcan64/spp_patched_qemu
  6. https://github.com/youcan64/spp_patched_linux
  7. https://people.kth.se/~farshin/documents/slice-aware-eurosys19.pdf

Copy-on-Pin: The Missing Piece for Correct Copy-on-Write @ASPLOS’23

Nadav has been enumerating the Intel extensions that provide virtualization support for VMware, and applying those extensions to security mitigations and debugging. He also provides things like userspace remote memory paging [2] to give VMware a better service-disaggregation technology. They have investigated the vulnerability of the IOMMU to DMA attacks [1], and tackled remote TLB shootdown performance bugs (updating the page table incurs TLB shootdowns) by introducing concurrent flushing, early acknowledgment, cacheline consolidation, and in-context TLB flushes [3].

This paper examines the interaction between COW and pinned pages, which are pages that cannot be moved or paged out to allow the OS or I/O devices to access them directly.

Basically, we need to prevent COW sharing of pinned pages. The paper considers how COW interacts with other prevalent OS mechanisms such as POSIX shared mappings, caching references, and page pinning. It defines an invariant, that any private writable mapping must be a single exclusive mapping, and provides test cases that exercise COW and page pinning via O_DIRECT read()/write() in combination with fork() and write accesses.
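
To make the shape of such a test concrete, here is my own minimal sketch (not the paper's test suite). It only combines the same ingredients, an O_DIRECT read into an anonymous buffer, a fork(), and writes on both sides, and expects a file named data of at least 4 KB:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* O_DIRECT needs an aligned buffer; "data" is any file >= 4 KB. */
    int fd = open("data", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    char *buf;
    if (posix_memalign((void **)&buf, 4096, 4096)) return 1;

    /* DMA fills the page via get_user_pages-style pinning. */
    if (read(fd, buf, 4096) != 4096) { perror("read"); return 1; }

    pid_t pid = fork();          /* the filled page is now COW-shared */
    if (pid == 0) {
        buf[0] = 'C';            /* child write must not be seen by the parent */
        _exit(0);
    }
    waitpid(pid, NULL, 0);

    buf[1] = 'P';                /* parent write breaks COW on its own copy */
    printf("parent sees buf[0]=0x%02x (unchanged by the child)\n",
           (unsigned char)buf[0]);

    free(buf);
    close(fd);
    return 0;
}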

For the implementation, they built something similar to dynamic taint analysis that marks an exclusive flag per page (possibly a place for a CXL hardware/software co-design of this, at cacheline or page granularity). The flag also enables refinements that avoid unnecessary copies, and the design handles swapping, migration, and read-only pinning correctly. This design, RelCOP, was integrated into upstream Linux 5.19 with 747 lines added and 340 removed. Compared to two prior COW handling schemes, RelCOP introduces no noticeable overhead and outperforms PreCOP by up to 23% in the sequential-access benchmark and 6% in the random-access benchmark.

Reference

  1. Characterizing, Exploiting, and Detecting DMA Code Injection Vulnerabilities in the Presence of an IOMMU @Eurosys'20
  2. https://patentimages.storage.googleapis.com/74/32/e2/d300f0489ffc90/US20220398199A1.pdf
  3. Don't shoot down TLB shootdowns!

Moving Disaggregation to CXL

Today, after listening to the latest pre-CXL RDMA-based work such as Carbink, AIFM, CompuCache, Infiniswap, Fastswap, MemLiner, Clover, DINOMO, RACE Hashing, Sherman, and FUSEE, I'm wondering how much disaggregated memory has actually been deployed in the RNIC manner.

We will be weighing implementation ideas from research papers against three critical requirements for remoteable pointers:

  1. Must work from the source as pointers even when the memory is far (requires zero implementation in CXL for the most part)
  2. Must work at the device for offloading pointer chasing to CXL memory device or pre-CXL memory node
  3. Must work at newly started compute without the friction of serialization-deserialization for independent scaling of memory and compute

Is Phantom address a good solution?

Is wasm a good solution?

Reference

  1. InfiniFilter: Expanding Filters to Infinity and Beyond @SIGMOD'23
  2. Sherman: A Write-Optimized Distributed B+Tree Index on Disaggregated Memory @SIGMOD'22

Is MMAP still good for the post-CXL era?

A short answer is no.

  1. MMAPing a huge file needs the OS to register a virtual address range to map the file onto; once any request to the file is made, a page fault loads the file from disk into private DRAM, sets up the VA-to-PA mapping, and buffers that part of the file in DRAM, with the TLB caching translations for the next read. Every CXL device has its own mapping of memory; if you mmap memory that has been swapped onto a CXL.mem device such as a memory-semantic SSD, the SSD controller may decide whether to place it in on-SSD DRAM or flash and, in the backend, write everything through to the physical media. CXL vendors badly want to implement deferred allocation that lazily binds physical memory to virtual memory, which overlaps with the MMAP mechanism.
  2. MMAP + madvise/NUMA binding to certain CXL-attached memory may cause migration effort. Once you dirty-write the pages, there is no transaction support in the CXL protocol yet, and getting this mechanism right is painful. Instead, we can do something like TPP or CXLSwap and make everything transparent to applications, or build 3D memory and extend the compute capability of the CXL controller so that it decides where to put the data and maintains the transaction beneath physical memory (see the mbind sketch after this list).
  3. MMAP was originally designed for a fast memory tier paired with a slower disk tier such as HDDs. Say you are loading graph edges from a large HDD-backed pool: the frequently accessed part is defined in software as a streaming pool for cold/hot data management, and MMAP can transparently leverage the OS page cache. That stops being the case with more and faster endpoints. With a more complex topology of CXL NUMA devices we can afford to handle fewer errors at a time on the main bus, so we should not stall on page faults and instead require them to be handled on the endpoint side.
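
As promised in item 2, a sketch of the user-facing side, assuming the CXL-attached memory shows up as NUMA node 1 (the node id and region size are illustrative; mbind comes from libnuma, so link with -lnuma):

#define _GNU_SOURCE
#include <numaif.h>         /* mbind() */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 1UL << 28;                       /* 256 MiB region */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    unsigned long nodemask = 1UL << 1;            /* node 1 = CXL.mem (assumption) */
    if (mbind(p, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0))
        perror("mbind");                          /* pages will fault in on node 1 */

    madvise(p, len, MADV_HUGEPAGE);               /* optional placement hint */
    memset(p, 0, len);                            /* touching the pages allocates them */

    munmap(p, len);
    return 0;
}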

Thus we still need a management layer like SMDK (jemalloc + libnuma + CXLSwap) for CXL.mem. For interfacing with CXL.cache devices, I think deferred allocation and managing everything through virtual memory would be fine. Then we don't need programming models like CUDA; rather, we can use static analysis through MLIR to generate good data-movement hints for every CXL controller's MMU and TLB. We could also treat the CXL.cache cacheline states as a streaming buffer, so that every possible endpoint reads ahead and then gets updates with the next write.

Reference

  1. https://db.cs.cmu.edu/mmap-cidr2022/
  2. https://blog.csdn.net/juS3Ve/article/details/90153094

On the transition from learning to research, and on research as exploration and as a livelihood

Learning is a broad process of acquiring knowledge. If it is aimed at the gaokao or at university finals, its purpose is still for students to master whatever the teacher wants them to master, the basic theory and skills required by the major, but the teacher is not responsible for the validity or timeliness of that knowledge. Learning without external constraints, when a person no longer needs a GPA to earn honors or to get into a better place, and when the minimum bar for graduating is merely passing every course, produces a lot of deadweight in the world. Higher education, in my view, should cultivate the ability to think independently on top of basic survival skills: to find what you are good at within the tide of social progress and to keep pace with the world's development. For computer science, practice is the best way to acquire that ability, and cultivating practical skill matters more than learning theory, because in engineering everything has exceptions; you cannot figure out from pure speculation why another company or an independent developer wrote things a certain way, and it is easy to fall into a loop of demanding perfection. Engineering study is incremental: practical ability grows as your horizons broaden, until the day you can build the train yourself. PhD-stage learning comes from research and returns to research. We do not need to chase knowledge irrelevant to our own topic, since it only eats into research time; broadening one's horizons should instead come from discussions with other researchers, from competitions or reproducing others' artifacts, and from internships, rather than from merely looking up the answers to one's topic in books. For topics you want to work on later, you can read books and industry best practices in advance and subconsciously think about the parts industry has not thought of, which is where your value lies. The transition from learning to research is the shift of one's motivation from external to internal: I keep asking what industry has not thought of, or what currently benefits only a few head companies rather than the whole industry, and how to serve industry better.

After talking with people who have already lost interest in research, I find that most of them lost, amid endless attempts, the search for the value they could offer as a PhD. They gave up exploring, turned the PhD into a way to hide from the recession, and now scrape by on daily necessities while conning funding for a living, even if they come from the big four, even if they believe the world is already owned by companies with economies of scale. What is academic exploration? Is it the bad money driving out good brought by so-called peer review, or the good money driving out bad when a PhD and their advisor come up with something that can change how industry thinks? Spark is a good example: Matei saw that Facebook used Hadoop but was frustrated by its slow OLAP performance, so at Berkeley he built an in-memory Hadoop. I marvel at Berkeley's brute force producing miracles: under peer review it had no "novelty" at all, yet after three rejections it won best paper on the strength of Facebook's wide adoption. Demand and solutions are the source of progress in computer science; for an idea without deep theorizing, if it works, true peers will still tell you, "I buy it." Berkeley is not a place well suited to research: sky computing, Alpa, foundation models, ORAM, UCB, which of these was not built on someone else's paper, wrapped in a Berkeley-defined layer, reimplemented, and then hyped as made in UC Berkeley? But is that innovation? Yes, because that is communism. The so-called old-timers will tell you a shortcut into the big four or a top-20 school, but can they publish at SOSP themselves, or are they the bad money, handing you the advisor's leftover idea to implement so it turns into a top-tier paper? A PhD position is a job that can guide how industry behaves, and such a wonderful exploratory stage must not be wasted.

What guides my day-to-day research progress? I think the advisor only provides direction and suggestions on how to fill daily life so as to reach the goal of getting papers out; the real decision is mine. I hope to get my EB-1A green card before force is used against Taiwan, and to land an 800k job when I graduate. Before I get there, I try to understand what industry needs, hoping to offer whatever advice I can before they set their eyes on it, and to capture value along the way. OpenAI says this is the golden age of AI, David Patterson says this is the golden age of computer architecture, Chris Lattner says this is the golden age of compilers, and I say this is the golden age of hardware/software co-design!

Oblivious RAM survey

I took an Applied Cryptology class. This professor, like every architecture professor, pushes you hard to do what he wants done. Roughly speaking, cryptography research either defines a full pipeline like CryptDB, or weakens one cryptographic property to optimize performance, or optimizes the complexity of the encryption itself. You usually start from a dummy implementation and optimize it gradually, along three axes:

  1. More secure
  2. More Efficient
  3. More Expressive and functional
    1. Various query types (Boolean, point, range, join, group-by,...)
    2. Dynamic query workloads
    3. Specialized for various DB scenarios (relational, graph, array DBs, ...)

Cryptography Recap

Crypto Property

  • One-time pad, the easiest one is ⊕

    Both the plaintext and the ciphertext are n bits; the problem is that shipping the key is too dangerous (see the toy sketch after this list).
  • RSA
  • Pseudo-Random Function
  • Privacy-Preserving
    • if (a==b) then Enc(a) = Enc(b) - Deterministic encryption
    • if (a<=b) then Enc(a) <= Enc(b) - Order-preserving encryption
  • CPA-security(Chosen-plaintext attack)
    • The CPA indistinguishability experiment $PrivK^{cpa}_{\mathcal{A},\pi}$ (a weaker guarantee than perfect secrecy)
  • Dynamic searchable encryption is CPA-secure
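
The toy sketch referenced in the one-time-pad bullet: encryption and decryption are the same XOR, and the entire difficulty is shipping a fresh, truly random key as long as the message (the values below are made up).

#include <stdio.h>
#include <stddef.h>

static void xor_pad(unsigned char *out, const unsigned char *in,
                    const unsigned char *key, size_t n) {
    for (size_t i = 0; i < n; i++) out[i] = in[i] ^ key[i];   /* c = m ⊕ k */
}

int main(void) {
    unsigned char msg[4] = { 'C', 'X', 'L', '!' };
    unsigned char key[4] = { 0x13, 0x37, 0xAB, 0xCD };  /* must be truly random, used once */
    unsigned char ct[4], pt[4];
    xor_pad(ct, msg, key, 4);       /* encrypt */
    xor_pad(pt, ct, key, 4);        /* decrypt: XOR with the same key restores msg */
    printf("%c%c%c%c\n", pt[0], pt[1], pt[2], pt[3]);
    return 0;
}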


- SDa
- High-level idea: organize $N$ updates in a collection of at most $\log_2 N$ independent encrypted indexes
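
A concrete way to picture this (my reading of the construction, in the spirit of Bentley-Saxe dynamization): after $N = 11$ updates the collection mirrors the binary representation $11 = 8 + 2 + 1$, i.e. encrypted indexes of sizes 8, 2, and 1; the next update merges the size-1 and size-2 indexes the way a binary counter carries, so at most $\log_2 N$ indexes exist at any time.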

One can go from amortized to de-amortized cost by adding dummy operations and sorting obliviously to make it work.

  • Oblivious algorithm

    • The sorting algorithm must not hurt the oblivious property; e.g., bitonic sort costs $N\log^2 N$ (see the sketch after this list)
    • Oblivious sorted multimaps AVL Tree
  • MPC

  • A TEE does not need to download everything every time; instead the client sits inside the enclave and exchanges data with the outside devices, which saves transmission time.

  • ORAM prevents the access pattern to RAM from being leaked to the attacker

  • Backward Privacy
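
Referenced from the bitonic-sort bullet above: a minimal sketch of the data-oblivious compare-exchange network. The sequence of comparisons depends only on the input length (a power of two here), and there are roughly $N\log^2 N$ compare-exchanges, matching the cost quoted above.

#include <stdio.h>

static void cmp_exchange(int *a, int i, int j, int ascending) {
    if ((a[i] > a[j]) == ascending) { int t = a[i]; a[i] = a[j]; a[j] = t; }
}

static void bitonic_sort(int *a, int n) {           /* n must be a power of two */
    for (int k = 2; k <= n; k <<= 1)                /* size of bitonic sequences */
        for (int j = k >> 1; j > 0; j >>= 1)        /* compare-exchange distance */
            for (int i = 0; i < n; i++) {
                int partner = i ^ j;
                if (partner > i)
                    cmp_exchange(a, i, partner, (i & k) == 0);
            }
}

int main(void) {
    int a[8] = { 5, 1, 7, 3, 0, 6, 2, 4 };
    bitonic_sort(a, 8);
    for (int i = 0; i < 8; i++) printf("%d ", a[i]);   /* 0 1 2 3 4 5 6 7 */
    printf("\n");
    return 0;
}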

CryptDB

It runs the search scheme by encrypting/decrypting ciphertext directly, which, as you can imagine, is dead slow. There are many rebalance/reordering optimizations.

Searchable Encryption

How do we guarantee that no information is leaked during a search, especially the access pattern?

We can encrypt every single step of a DB operation.

What security folks also love to do is write simulators.

What we need to guarantee is that the probability difference between the access operations of one real search and the ideal ones stays within a controllable range related to $\lambda$, a bit like the $\epsilon$-$N$ definition of a limit. The most naive construction is PiBas, with a simulator to simulate the SE scheme, followed by a proof that the simulated and the real executions are barely distinguishable.
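
In symbols, the standard shape of that simulation-based definition (my phrasing, not a quote from the course slides): for every PPT adversary $\mathcal{A}$ there is a simulator $\mathcal{S}$ that sees only the leakage $\mathcal{L}$ such that

$$\left|\,\Pr[\mathbf{Real}_{\mathcal{A}}(\lambda)=1]-\Pr[\mathbf{Ideal}_{\mathcal{A},\mathcal{S},\mathcal{L}}(\lambda)=1]\,\right|\le \mathrm{negl}(\lambda).$$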



At the algorithm level, the key encryption is defined as above.

It is easier to test the effectiveness of an algorithm inside the simulator, and to prove properties like adaptive indistinguishability.

Does constructing all the keys and padding to a power-of-two size reduce the database's result-size leakage?

What about adding random (key, value) pairs to $T$, so that the total number of elements is $2N$?

Optimal or Quasi-Optimal Search Time

Use ORAM Map + Backward privacy

I/O Efficient Searchable Encryption

Define Locality and Read Efficiency

Onechoice Allocation



Relax the property, allowing a small possibility of leaking the access pattern: Page-Efficient SSE.

  • Uniform random document/tuple id reassignment
  • Compress the index
  • Encrypt the compressed index

Private Range Queries


The key idea is to transform range queries into point queries.
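
A standard way to see this (illustrative, not necessarily the exact scheme from the lecture): build a binary tree of dyadic intervals over the domain and index each record under every interval containing it; then a range such as $[2,6]$ over $[0,7]$ is covered by the nodes $[2,3]$, $[4,5]$, and $[6,6]$, so one range query becomes three point queries, and at most $2\log_2 |D|$ of them in general.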



Leakage Abuse Attacks

This part is about figuring out where things can be abused.











The basic idea is to put an aside ORAM next to your ORAM and use a hash function to split it into two parts differing by a factor of $2^n$.






Can leak the private join operation

ORAM introduction

A simple ORAM using Shuffling

This oblivious shuffling basically tells us the cost of maintaining the mapping: O(log N) with O(1) storage, O(N) with O(log N) storage, and O(N log N) with O(N) storage.



The shuffle has many variants: double-buffered, amortized square-root shuffle, and cubic. The security proof is that over $\sqrt N$ operations, the expected number of probes across all data slots works out exactly, so the buffer can grow to $\sqrt N$.
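
A compact sketch of the square-root construction described above, under simplifying assumptions: the reshuffle below is a plain Fisher-Yates done in the clear and the shelter scan is not constant-time, whereas a real ORAM would use an oblivious shuffle (e.g. the bitonic network sketched earlier) and an oblivious scan; all sizes are illustrative.

#include <stdlib.h>

#define N      1024                 /* real blocks (illustrative)            */
#define ROOT_N 32                   /* shelter size = reshuffle period = √N  */
#define TOTAL  (N + ROOT_N)         /* real blocks + dummy blocks            */

typedef struct { int id; long val; } block_t;

static block_t store[TOTAL];        /* server-side permuted array            */
static int     pos[TOTAL];          /* block id -> current index in store    */
static block_t shelter[ROOT_N];     /* client-side shelter                   */
static int     shelter_len, epoch_accesses;

static void write_back_and_reshuffle(void) {
    for (int i = 0; i < shelter_len; i++)            /* flush shelter into the store      */
        store[pos[shelter[i].id]] = shelter[i];
    for (int i = TOTAL - 1; i > 0; i--) {            /* stand-in for an oblivious shuffle */
        int j = rand() % (i + 1);
        block_t t = store[i]; store[i] = store[j]; store[j] = t;
    }
    for (int i = 0; i < TOTAL; i++) pos[store[i].id] = i;
    shelter_len = epoch_accesses = 0;
}

static void oram_init(void) {
    for (int i = 0; i < TOTAL; i++) { store[i].id = i; store[i].val = 0; }
    write_back_and_reshuffle();
}

/* Each access touches the whole shelter plus exactly one store slot, so the
 * server cannot tell whether the block was already cached in the shelter.   */
static long oram_read(int id) {
    int hit = -1;
    for (int i = 0; i < shelter_len; i++)            /* real code scans this obliviously   */
        if (shelter[i].id == id) hit = i;
    block_t blk;
    if (hit >= 0) {
        blk = shelter[hit];
        (void)store[pos[N + epoch_accesses]];        /* touch the next unused dummy        */
    } else {
        blk = store[pos[id]];
        shelter[shelter_len++] = blk;                /* keep it client-side for this epoch */
    }
    if (++epoch_accesses == ROOT_N)                  /* reshuffle every √N accesses        */
        write_back_and_reshuffle();
    return blk.val;
}

int main(void) {
    oram_init();
    for (int i = 0; i < 100; i++) (void)oram_read(i % N);
    return 0;
}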



On every read, each level except the first uses the PRF $K_i$ to obtain the result, and then the item is put into the first level. Level $i$ is reshuffled once every $2^i$ operations.
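
Rough amortized arithmetic (assuming bitonic sort is used for the reshuffle): level $i$ holds up to $2^i$ entries and is reshuffled every $2^i$ operations at cost $O(2^i \log^2 2^i)$, so each level contributes $O(\log^2 N)$ amortized per access, and summing over the $\log N$ levels gives the classic $O(\log^3 N)$ amortized overhead.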

Existing ORAM databases

ObliDB

A standard design that protects the data with a TEE.

Oblix

This DB is written in Rust. The idea is to put the client inside SGX and be doubly oblivious, i.e. the ORAM client makes data-independent accesses to client memory (within the enclave), so the server and the client cannot see each other's access patterns. The Path ORAM approach keeps the position map of the stashed data inside the enclave client, puts the items on an untrusted binary tree, and obtains the result through the trace; that design is only singly oblivious.


DORAM reads the server-side trace by separating the block from the fetched block.
Read: fetch a path once, replace a dummy, add the block, and add dummy paths into the buckets.
Write back:

Snoopy

Opaque

Attacks, and making ORAM robust against malicious behavior

This part is still a trade-off between performance and security.

Reference

  1. https://people.eecs.berkeley.edu/~raluca/oblix.pdf
  2. https://arxiv.org/pdf/2106.09966.pdf
  3. https://www.cs.umd.edu/~jkatz/papers/sqoram.pdf
  4. https://cseweb.ucsd.edu//~cdcash/oram-slides.pdf
  5. https://keystone-enclave.org/open-source-enclaves-workshop/slides/OSEW19_RohitSinha_VisaResearch.pdf

Paper List

  1. RingORAM---Constants Count: Practical Improvements to Oblivious RAM
  2. Path Oblivious Heap: Optimal and Practical Oblivious Priority Queue
  3. Bucket Oblivious Sort: An Extremely Simple Oblivious Sort
  4. Fast Fully Oblivious Compaction and Shuffling
  5. Oblix: An Efficient Oblivious Search Index
  6. Snoopy: Surpassing the Scalability Bottleneck of Oblivious Storage
  7. Opaque: An Oblivious and Encrypted Distributed Analytics Platform
  8. Efficient Oblivious Database Joins
  9. Pancake: Frequency smoothing for encrypted data stores
  10. Snapshot-Oblivious RAMs: Sub-Logarithmic Efficiency for Short Transcripts
  11. SHORTSTACK: Distributed, Fault-tolerant, Oblivious Data Access
  12. Meltdown: Reading Kernel Memory from User Space
  13. Observing and Preventing Leakage in MapReduce