UCLID5: Integrating Modeling, Verification, Synthesis, and Learning

While browsing the Keystone paper, I came across UCLID5, the verification toolkit they apply: a software toolkit, written in Scala on top of SMT solvers such as Z3, for the formal modeling, specification, verification, and synthesis of computational systems. Much of the added effort goes into casting architectural mechanisms, such as the PMP register mapping policy with NAPOT address matching, S-mode world changes, and exceptions, into Hoare-logic-style specifications and then verifying them. Going further, the Y86-64 architecture introduced in CSAPP has also been verified with UCLID5.

The Goals

  1. Enabling modular modeling of finite and infinite-state transition systems across a range of concurrency models and background logical theories.
  2. Performing highly automated, compositional verification of a diverse class of specifications, including pre/post-conditions, assertions, invariants, LTL, refinement, and hyperproperties.
  3. Integrating modeling and verification with algorithmic synthesis and learning.

Threat models

SRE (Secure Remote Execution)

Heartbleed-like attacks, side channels, and Meltdown/Spectre also cast doubt on SGX/TEEs. This motivates secure remote execution on trusted platforms: the authors formalize the enclave platform and what it means for remote execution to be trustworthy.

Embodying the enclave and the adversary

UCLID5 verifier


Modules serve as the unit of reuse for declaring commonly used types, symbolic constants, and uninterpreted functions (much as modules do in LLVM). The CPU module uses uninterpreted functions to model instruction decoding (inst2op, inst2reg0, etc.) and instruction execution (aluOp and nextPC). Further, the register-file index type (regindex_t) and the address type (addr_t) are both uninterpreted types. This abstraction is sufficient to model a wide range of CPU implementations.
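To make the idea concrete, here is a minimal sketch in plain Java (my own illustration, not UCLID5 or the paper's model) of what an uninterpreted function promises: nothing about its values except functional consistency, i.e., equal inputs give equal outputs. A memo table that invents an arbitrary value on first use models exactly that guarantee.

```java
import java.util.HashMap;
import java.util.Map;

// Illustration only: an "uninterpreted" function is constrained solely by
// functional consistency (equal inputs => equal outputs). A memo table that
// picks an arbitrary value on first use models exactly that guarantee.
class Uninterpreted {
    private final Map<Long, Long> memo = new HashMap<>();
    private long fresh = 0;

    // Returns an arbitrary but consistent value for each input, the way a
    // solver is free to pick any interpretation for inst2op or aluOp.
    long apply(long input) {
        return memo.computeIfAbsent(input, k -> fresh++);
    }
}
```

Any property proved against such a function holds for every concrete decoder or ALU, which is why this abstraction covers so many CPU implementations at once.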

Structure of the CPU Model:

  1. Module common: top-level type definitions shared by the other modules.
  2. Module cpu: defines the initial state (reg/pc/imem/dmem); its types include mode_t (isolated/normal), word_t, and addr_t (with an isolated range of addresses between isolated_rng_lo and isolated_rng_hi); init, next, and exec_inst define the transition system.
  3. Module main: states the environmental assumptions for the verification and contains a proof script that proves, via induction, a 2-safety integrity property of the cpu module.

Other features

  1. Term-Level Abstraction: In term-level abstraction, concrete functions or functional blocks are replaced by uninterpreted functions or partially-interpreted functions. Concrete low-level data types such as Booleans, bit-vectors, and finite-precision integers are abstracted away by uninterpreted symbols or more abstract data types (such as the unbounded integers).
  2. Blending Sequential and Concurrent System Modeling
  3. Diversity in Specification
    1. Procedure specifications.
      1. Invariant
      2. while
      3. assume
      4. assert
    2. LTL
    3. Hyperproperties (k-safety): composing k copies of the system together and specifying a safety property on the resulting composition
  4. Modularity in Modeling and Specification
  5. Counterexample Generation
  6. Modular Proof Scripting
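As a toy illustration of the k-safety idea (my own sketch, not UCLID5 code), a 2-safety integrity property can be checked by composing two copies of a step function whose public inputs agree but whose isolated (secret) state differs, and requiring the public outputs to agree; `step` here is a hypothetical stand-in for the CPU transition.

```java
// Illustration of checking a 2-safety property by self-composition: run two
// copies of the system that agree on public input but differ in secret
// state, and state an ordinary safety property over the pair.
class TwoSafety {
    // Public-observable result of one transition. It ignores the secret,
    // so normal-world behavior cannot depend on isolated state.
    static int step(int publicIn, int secret) {
        return publicIn * 2 + 1;
    }

    // The composed check: an ordinary (1-safety) assertion over two copies.
    static boolean integrityHolds(int publicIn, int secretA, int secretB) {
        return step(publicIn, secretA) == step(publicIn, secretB);
    }
}
```

A verifier proves this for all inputs (e.g., by induction over the composed transition system); running it on samples, as below, only demonstrates the shape of the property.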

More for extended reading

  1. https://www.hillelwayne.com/post/hyperproperties/
  2. https://www.cs.unc.edu/~csturton/courses/verifiedsec/lecturenotessp17/notes_lbarnett.pdf

[Program Analysis] Soundiness and soundness

Soundiness is a response to the fact that virtually no whole-program analysis is truly sound for real languages. A "soundy" analysis is sound for most language features but makes deliberate, clearly stated unsound choices for the hard ones (reflection, JNI, dynamic loading), while still claiming to cover most cases.

Reflection in Java

First, Java reflection is built on metaobjects: Class, Method, and Field.

  1. One solution: string-constant analysis + pointer analysis
  2. List and array propagation
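The string-constant case is the one this combined analysis can resolve: when the class and method names are compile-time constants, the targets of Class.forName and getMethod are statically known. A minimal example (my own, using only the standard reflection API):

```java
import java.lang.reflect.Method;

// The "easy" reflection case for static analysis: class and method names
// are string constants, so the call target is statically resolvable.
class ReflectDemo {
    static int resolvedCall() {
        try {
            Class<?> c = Class.forName("java.lang.Integer");  // constant string
            Method m = c.getMethod("parseInt", String.class); // constant string
            return (Integer) m.invoke(null, "42");            // static method, no receiver
        } catch (Exception e) {
            return -1; // reflection failure
        }
    }
}
```

When the strings instead flow in from configuration files or user input, the constant analysis fails, and the analysis must fall back on pointer analysis or unsound assumptions.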

JNI Call

  1. One solution: translate the native C code into Java so the Java analysis can see it
  2. Scanning the native binary

[A Primer on Memory Persistency] Reading Notes on Persistent Memories

These are my reading notes for the book A Primer on Memory Persistency, collecting the parts I consider important. When I read A Primer on Memory Consistency and Cache Coherence back in the day, I was amazed by its treatment of memory, which was the biggest reason I became interested in the subject. I leafed through these books again and again while taking CA2.


[Program Analysis] Static Analysis for Security

Like symbolic execution, taint analysis is an important tool for analyzing code security vulnerabilities and detecting attack methods. It can effectively detect a large number of security vulnerabilities in web applications, such as cross-site scripting and SQL injection. Taint propagation is an important topic within the field: combined with static program analysis, it yields more efficient and accurate results by analyzing the dependencies between program variables, without running or modifying the code.

Taint analysis tracks how tainted data flows through the program and checks whether it can reach locations of interest (called sinks).
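A minimal sketch of taint propagation (my own illustration, not a real analyzer): statements are (target, source) assignment pairs, the variable named "source" introduces taint, copies propagate it with strong updates on overwrite, and we ask whether a sink variable ends up tainted.

```java
import java.util.HashSet;
import java.util.Set;

// Toy taint propagation over straight-line (target = source) assignments.
class Taint {
    static boolean leaks(String[][] assigns, String sink) {
        Set<String> tainted = new HashSet<>();
        tainted.add("source"); // the designated taint source
        for (String[] a : assigns) {       // each a encodes: a[0] = a[1]
            if (tainted.contains(a[1])) {
                tainted.add(a[0]);         // taint flows through the copy
            } else {
                tainted.remove(a[0]);      // overwritten with clean data (strong update)
            }
        }
        return tainted.contains(sink);     // does taint reach the sink?
    }
}
```

Real analyzers additionally handle heap objects, sanitizers, and control-flow joins, but the propagate-to-sink skeleton is the same.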

[Program Analysis] Inter-procedural Dataflow Analysis

IFDS, published at POPL '95, is a leapfrog jump past classical intra-procedural analysis, since it runs in: 1. linear time for certain specific individual problems; 2. linear time for locally separable problems (gen/kill bit-vector problems); 3. polynomial (non-linear) time for other common problems.

CFL reachability

Feasible and Realizable Paths

Paths in the CFG that do not correspond to any actual execution are infeasible paths. If we can keep the CFG free of infeasible paths as much as possible and make the analysis flow-graph edges more concise, we are bound to greatly improve analysis speed. In practice, however, detecting infeasible paths is undecidable.

Realizable Paths

Realizable paths are those in which each "return" is matched with its corresponding "call". Since eliminating all infeasible paths is undecidable, we widen the target: if along a path every return edge correctly matches its call edge, we call the path realizable.

B is CFL-reachable from A if there is a path (a sequence of edges) from A to B and the concatenation of the edge labels along this path is a word in a pre-specified context-free language.

Partially Balanced-Parenthesis Problem

The search for realizable paths is abstracted into the classic parenthesis-matching problem. "Partially" here means that every )i must have a matching (i, but not every (i needs a matching )i. In other words, for every return edge there is a call edge match, but a call edge does not necessarily have a return yet (e.g., a realizable path may end inside a call that has not returned). The problem is modeled as follows: every edge of the control-flow graph is given a label; for a call statement i, the call edge is labeled (i and its return edge is labeled )i, while all other edges are labeled e.

A path is a realizable path if the path's word is in the language L(realizable).
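The partially balanced check can be sketched with a stack (my own illustration): each return label )i must match the most recent unmatched call label (i, while leftover unmatched calls are allowed.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Membership test for the "partially balanced parentheses" language:
// every ")i" must close the most recent open "(i"; unmatched "("s may
// remain, since a call need not have returned yet.
class Realizable {
    static boolean isRealizable(String[] labels) { // labels like "(1", ")1", "e"
        Deque<String> openCalls = new ArrayDeque<>();
        for (String l : labels) {
            if (l.startsWith("(")) {
                openCalls.push(l.substring(1));
            } else if (l.startsWith(")")) {
                if (openCalls.isEmpty() || !openCalls.pop().equals(l.substring(1))) {
                    return false; // return without a matching call
                }
            } // "e" edges carry no constraint
        }
        return true; // pending calls are fine (partially balanced)
    }
}
```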

IFDS

IFDS handles interprocedural dataflow analysis with distributive flow functions over finite domains. It transforms a large class of dataflow analysis problems into graph-reachability problems, achieving meet-over-all-realizable-paths precision in polynomial time (compared with the iterative dataflow framework), under the following constraint:

  1. The dataflow facts must come from a finite domain and the flow functions must be distributive over it; many general dataflow analysis problems are subset problems and satisfy this, even though interprocedural analysis is whole-program analysis.

MRP

IFDS uses the meet-over-all-realizable-paths (MRP) solution to measure its analysis precision.

By definition, $MOP_{n}=\bigsqcup_{p \in Paths(start, n)} pf_{p}(\perp)$: for each node $n$, $MOP_{n}$ is the meet (union/intersection) of the path flow functions over all paths from the start node to $n$ on the CFG.

Analogously, $MRP_{n}=\bigsqcup_{p \in RPaths(start, n)} pf_{p}(\perp)$: for each node $n$, $MRP_{n}$ is the meet over only the realizable paths from the start node to $n$, i.e., the paths whose edge labels form a word of the realizable context-free language. Since fewer paths are joined, the result is at least as precise as $MOP_{n}$: $MRP_{n} \leqslant MOP_{n}$.

Algorithm

IFDS Algorithm:

Input: Program and the Data Flow Analysis $O$

Reference

  1. https://haotianmichael.github.io/2021/06/06/NJU%E9%9D%99%E6%80%81%E7%A8%8B%E5%BA%8F%E5%88%86%E6%9E%90-5-IFDS/
  2. https://apps.dtic.mil/sti/pdfs/ADA332493.pdf
  3. https://minds.wisconsin.edu/bitstream/1793/60188/1/TR1386.pdf

[Program Analysis] Intra-procedural Dataflow Analysis

Global optimization is based on dataflow analysis. Mainstream compiler intermediate representations such as LLVM and GraalVM use SSA form for their dataflow analysis, while Soot on the JVM uses the classical lattice-based formulation for intraprocedural analysis.

Reference

  1. https://haotianmichael.github.io/2021/05/04/NJU%E9%9D%99%E6%80%81%E7%A8%8B%E5%BA%8F%E5%88%86%E6%9E%90-1-Data-Flow-Analysis/

Some pitfalls using docker to host Nginx and php-fpm

I'm deploying my WordPress on AWS with a Lambda function, with all the services coerced into one container. (I intended to separate them into different services, e.g. MySQL, Nginx, and the lambda-php-server, but it wasn't necessary.) And I kept getting 504s from Nginx.

My frontend Nginx does a proxy_pass to the backend Lambda function, which always timed out. At first I suspected the chassis performance was bad, or that some cgroup limit was imposed on the service. But the log behind Nginx shows setrlimit(RLIMIT_NOFILE, 51200) failed (1: Operation not permitted), so I realized I had to grant the privilege to the Docker runtime: the setrlimit call is checked against the limits the host imposes on the container.

docker run -c 1024 --blkio-weight 600 --ulimit nofile=51200:51200 -d -e https_proxy=http://172.17.0.1:1082 -e http_proxy=http://172.17.0.1:1082 -p 8022:22  -p 8443:443 -p 888:888 -p 3306:3306 -p 8888:8888 yangyw12345/wordpress_mainpage sh -c "while true;do echo hello world; sleep 100000;done"

[Program Analysis] LiveVariableAnalysis in Soot

Setup of the Soot

We need to first set up the Soot variables. With the command-line tools the config is passed via args, e.g. java -cp soot-2.5.0.jar soot.Main -pp -f J -cp . Hello, where -pp prepends the JVM's $CLASSPATH to Soot's classpath. In my implementation, these are passed into Options.v(). Notably, I had to switch off the optimization of the Java bytecode, or dead-code elimination would remove the intermediate steps.

CallGraph for Debug

I iterate over the CFG before and after the liveness analysis to inspect the problem.

for (Unit unit : body.getUnits()) {
    System.out.println(unit.toString() + "[entry]" + lva.getFlowBefore(unit));
    System.out.println(unit.toString() + "[exit]" + lva.getFlowAfter(unit));
}

LiveVariables

LiveVariableFlowSet

I record all the FlowSet<Local> into an enhanced AbstractBoundedFlowSet<Local> backed by a live-variable hash set, so that it supports clone and record operations.

BackwardFlowAnalysis

The analysis extends BackwardFlowAnalysis. The merge logic is set union and the copy logic is a plain copy. flowThrough first computes the kill set and removes it from the destination set; then the gen set (the variables used by the unit) is unioned in.
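The flowThrough step described above amounts to the standard liveness transfer function in = (out \ kill) ∪ gen; a minimal sketch with plain sets (the names are mine, not Soot's API):

```java
import java.util.HashSet;
import java.util.Set;

// Backward liveness transfer function: in = (out \ kill) ∪ gen, where kill
// is the set of variables the unit defines and gen the set it uses.
class Liveness {
    static Set<String> flowThrough(Set<String> out, Set<String> kill, Set<String> gen) {
        Set<String> in = new HashSet<>(out);
        in.removeAll(kill); // a definition ends liveness above it
        in.addAll(gen);     // a use makes the variable live above it
        return in;
    }
}
```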

Input and output of file

soot.Main only accepts a class name, so I strip the "./" and ".class" before passing it. The output directory is created in the same folder as the class file, and the output is written there.

Processing-in-Memory Course: Programming PIM Architectures by Onur & Juan

The course provides a software view of PIM Architecture like Intel does in its "Intel Manual for software people".

Also, the DRAM Processing Unit (DPU) is a long-discussed idea for exploiting DRAM bandwidth.

UPMEM provides processing in DRAM Engine.

The point of PIM is to exploit the large in-memory bandwidth and save the cycles spent moving data to on-chip compute.

Setups are DDR R-DIMM with 8GB+128DPUs and 2x-nm DRAM process.

In the end, there is still an instruction set used to ship code over to the DPU side.

Reference

  1. https://www.youtube.com/watch?v=RaOIoOQ5EgE
  2. ServeTheHome (2021-05-30). "DPU vs SmartNICs vs Exotic FPGAs". ServeTheHome. Retrieved 2022-01-03.