BABA IS YOU Walkthrough: Preface

A few days ago I came across a very interesting game while watching 逍遥散人's videos. As a puzzle game it is remarkably original and highly playable, so it is no surprise that it holds a 99% positive rating on Steam. Much of the time, solving its puzzles depends on a momentary flash of insight. I therefore want to record these flashes of inspiration, both for myself and for those who later want to enjoy this game but happen to get stuck.


Goodbye

Sometimes, saying goodbye is as easy and simple as a short phrase like “Gracias Manu” or “Mamba Out”. Sometimes you miss the players not only for their play, but also for the role they played throughout your entire childhood. I just feel that my childhood and youth have gone with the retirement of these figures, and they will never come back.

We need to learn to say goodbye: to these heroic figures, to these inspiring players, and to our own memories.


Comment on C++

Lecture 1

  1. #include is really useful when you want to use facilities that are not in the core language. Put the name of the header you want to use inside the <> symbols after it.
  2. std::cout is the standard output stream used to display values.
  3. Every statement must end with ";".
  4. Source code is the fundamental component of a computer program; it is created by a programmer.
  5. C++ is a compiled language, so you need to translate the source code into a file that the computer can execute. This file is generated by the compiler and is called the object code (.obj).
  6. Source code is portable between two different platforms, while object code is not.
  7. Variables in C++:
    1. Need to be declared first.
    2. Can be initialized when declared (with variablename(value)).
    3. There are several built-in types:
      1. int, short int, long int, unsigned int, unsigned short int, etc.
      2. float, double, long double.
      3. char, unsigned char, bool.
      4. enum (B. Stroustrup, 2013, The C++ Programming Language, fourth edition, p. 50).
    4. One can implement one's own type, e.g. a class.
    5. A variable of const type must be initialized when it is declared and cannot be modified later.
    6. <iomanip>: the standard library provides manipulators corresponding to the various format states and state changes.
    7. <vector> contains the implementation of the standard-library vector container.
    8. <algorithm> is a collection of routines for manipulating standard-library style containers.
  8. Anything declared in a scope will be destroyed when it goes out of the scope.
  9. “break”  terminates the execution of the nearest enclosing loop.
  10. “continue” jumps to the beginning of the next iteration of the nearest enclosing loop.
  11. The return type of a function must also be declared.
  12. The parts of a for-loop header are separated by ";", not ","!
  13. For a single-statement body, the braces can be omitted.
  14. Vector operations. (Note that v.end() returns an iterator to the position one past the last element.)
  15. switch: a switch selects among a set of alternatives (with case labels). The expression in the case labels must be a constant expression of integral or enumeration type.
  16. Passing arguments by value: inside the function, the variable's value is not used directly. Instead, a copy of the value is made and the function modifies the copy.
  17. One can use "type & x" to pass an argument by reference.
  18. One can use "const type & x" to pass an argument by const reference (see the sketch after this list).
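A minimal sketch of items 16-18; the function names are mine, not from the lecture:

```cpp
#include <iostream>

void byValue(int x)      { x += 1; }   // modifies a local copy only
void byReference(int& x) { x += 1; }   // modifies the caller's variable
void byConstRef(const int& x) {        // read-only access, no copy made
    std::cout << x << "\n";
}

int main() {
    int a = 10;
    byValue(a);      // a is still 10
    byReference(a);  // a becomes 11
    byConstRef(a);   // prints 11
    std::cout << a << "\n";  // prints 11
    return 0;
}
```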


Lecture 2

  1. Creating objects:
    1. Global Variable: static storage.
    2. The new operator is used to create an object that must outlive its scope. In general, a variable declared inside a function is destroyed when it goes out of scope; if we want to keep it, we can use new to allocate memory for it. The delete operator is the corresponding way of destroying such a variable.
  2. The expression "type & b = a" defines b as a reference to a, i.e. b is another name for a: whatever value a changes to, b changes to as well. Without the &, b is just initialized with the value of a.
  3. If the "&" notation is added between the return type and the name of a function, the function returns a reference rather than a value.
  4. Pointers!!!:
    1. A pointer to an object is a value representing the memory address of the object.
    2. In many cases it's cheaper to pass the pointer than the whole object.
    3. The object is accessible through its pointer. (This is important: it is how we can pass a function as an argument to another function.)
    4. Declaration of a pointer: Typename *iptr. (Here Typename can be any built-in or user-defined type; iptr is just the name of the pointer.)
    5. Take the address of an existing object: Typename *iptr = &a.
    6. Access the object through its pointer: Typename i = *iptr.
  5. Memory on the heap must be freed using the delete operator before the end of the program. (The reason is that once a variable is created with new, its memory is "locked": nothing else can be stored there until it is released with delete.)
  6. One way to avoid manual delete is to use a smart pointer, which deletes the object automatically; it is accessed through the same "->" syntax as a raw pointer.
  7. Pointers to functions: e.g. typedef double (*MyFunType)(double). This line defines a type named MyFunType: a pointer to a function that takes a double and returns a double.
  8. Namespaces: with namespaces, one can group functions, classes and typedefs under a name.
  9. There are different naming conventions; see lecture notes 3, page 20.
  10. Classes: a class is defined to have a set of members, which can be data, function or type members. We distinguish between a class and an instance of a class. (For example, in the example on page 6 of lecture notes 4, a number of type ComplexNumber is called an instance of the class ComplexNumber.)
  11. The instance is passed as a const argument if the member function is declared const.
  12. Functions declared within a class definition are called member functions and can be invoked only on a specific instance.
  13. Data members: these can be plain values, references, pointers and const versions of these. Const and reference members must be initialized at the construction of the instance.
  14. Constructors initialize instances of classes. (For example, you define a class called ComplexNumber. If you then declare a variable of type ComplexNumber, that variable must be initialized whether or not you put a value into it; hence you need a constructor to execute this action.)
  15. Default constructor: if initialization is not done explicitly, the compiler calls the default constructor of the data member.
  16. A default constructor takes no arguments.
  17. There is an example of a constructor in Lecture 4, page 11: the line "explicit ComplexNumber(double, double = 0.0)". Here the imaginary part has a default value while the real part does not; when we pass only one number to a variable of ComplexNumber type, it initializes the first (real) part.
  18. A copy constructor takes an instance of the same type and creates a new instance from it.
  19. The keyword explicit protects against implicit type conversion.
  20. For the example on page 14 of Lecture 4: the 4th line is the default constructor, initializing both dRe_ and dIm_; the 5th line initializes the data members with the values provided by the arguments.
  21. The class scope (ClassName::) must be used when implementing the member functions of a class outside its definition.
  22. Overloaded operators: one can define operators for instances of a class as one wishes. This is called operator overloading. The compiler detects the types of the operands and executes the corresponding operator.
  23. There is a special syntax for declaring operator members: Classname & operator __ (const Classname &) [the underlined part is filled with the operator symbol, such as "+"].
  24. One special operator is the function-call operator ().
  25. The operator "=" taking (a const reference to) the user-defined class as its argument is the copy assignment.
  26. The special expression "*this" returns a reference to the instance on which the member function was called (a sketch follows this list).
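A minimal sketch pulling items 14, 17, 19, 23, 25 and 26 together; the member names follow the dRe_/dIm_ convention from the notes, but the exact interface in the lecture may differ:

```cpp
#include <iostream>

class ComplexNumber {
public:
    // Constructor; the imaginary part defaults to 0.0, and `explicit`
    // prevents implicit conversion from a single double (items 17, 19).
    explicit ComplexNumber(double re, double im = 0.0) : dRe_(re), dIm_(im) {}

    // Overloaded operator+ (items 22-23).
    ComplexNumber operator+(const ComplexNumber& rhs) const {
        return ComplexNumber(dRe_ + rhs.dRe_, dIm_ + rhs.dIm_);
    }

    // Copy assignment (item 25); returns a reference via *this (item 26).
    ComplexNumber& operator=(const ComplexNumber& rhs) {
        dRe_ = rhs.dRe_;
        dIm_ = rhs.dIm_;
        return *this;
    }

    double real() const { return dRe_; }  // const member functions (item 11)
    double imag() const { return dIm_; }

private:
    double dRe_, dIm_;
};

int main() {
    ComplexNumber a(1.0, 2.0), b(3.0);   // b = 3 + 0i
    ComplexNumber c = a + b;
    std::cout << c.real() << " + " << c.imag() << "i\n";  // prints: 4 + 2i
    return 0;
}
```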

Lecture 3

  1. Template functions: the function definition depends on the type of the input argument. The compiler detects the type of the input provided and substitutes it for the dummy parameter in the template header (the line that looks like template<typename T>, where T is the dummy parameter).
  2. If the argument of the function uniquely determines the template type argument, there is no need to specify the template parameter at the call site.
  3. If there is only one type parameter in the template, passing inputs of several different types may cause a compile error, since the type to be used is ambiguous.
  4. In C++ one can implement template classes. One may give a default type in the template header, i.e. template<typename T, typename Op = std::less<T>>. If Op is not specified, it takes the default type.
  5. We can also write templates with non-type arguments and specializations; for example, a template function with an integer parameter n to calculate the Fibonacci sequence.
  6. There is also partial template specialization: sometimes the function behaves differently when one of the inputs is of a specific type.
  7. Full specialization: the behaviour might be very specific for a particular template argument list.
  8. A (singly) linked list consists of nodes (see the sketch after this list). Each node
    1. has some data
    2. has set and get methods for this data
    3. knows about the next node
    4. can tell if it has a next node
    5. has a get method for the next node
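A minimal sketch combining the two topics above: a template Node class for a singly linked list. The interface is my guess at the lecture's, not a copy of it:

```cpp
#include <iostream>

template <typename T>
class Node {
public:
    explicit Node(const T& data) : data_(data), next_(nullptr) {}

    // set and get methods for the data
    void setData(const T& data) { data_ = data; }
    const T& getData() const    { return data_; }

    // the node knows about the next node
    void setNext(Node* next) { next_ = next; }
    Node* getNext() const    { return next_; }

    // can tell whether it has a next node
    bool hasNext() const { return next_ != nullptr; }

private:
    T data_;
    Node* next_;
};

int main() {
    Node<int> head(1), second(2);
    head.setNext(&second);
    for (Node<int>* p = &head; p != nullptr; p = p->getNext())
        std::cout << p->getData() << " ";   // prints: 1 2
    std::cout << "\n";
    return 0;
}
```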

Comment on Monte Carlo Method

A general note before the start of this comment: in numerical simulation, the Monte Carlo method is useful in higher dimensions, whereas the finite difference method is more useful than Monte Carlo in lower dimensions.

Lecture 1

  1. The generation of random numbers involves three steps: generating independent uniforms on [0,1]; generating independent standard normals; generating correlated normals.
  2. A useful RNG is the Mersenne Twister, whose period is 2^19937 - 1.
  3. Four popular methods for generating random normals:
  4. Box-Muller: advantage: easy to understand. Disadvantage: log, cos and sin are quite expensive to calculate (see the sketch after this list).
  5. Marsaglia polar method: advantage: not as expensive as the Box-Muller method. Disadvantage: some of the random numbers must be abandoned, so it is less useful when parallelism is used.
  6. Marsaglia ziggurat method: advantage: fastest. Disadvantage: hardest to understand.
  7. Inverse normal method: advantage: as accurate as the other methods, but still costly.
  8. The normal CDF is related to the error function erf(x): Φ(x) = 1/2 + (1/2) erf(x/√2).
  9. Two ways to calculate correlated random normals:
  10. Cholesky factorization and PCA decomposition of the variance-covariance matrix.
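A minimal sketch of the Box-Muller transform from item 4, using only the standard library; the variable names are mine:

```cpp
#include <cmath>
#include <iostream>
#include <random>

int main() {
    const double pi = 3.14159265358979323846;
    std::mt19937 gen(42);                         // Mersenne Twister (item 2)
    std::uniform_real_distribution<double> unif(0.0, 1.0);

    double u1 = 1.0 - unif(gen);                  // shift into (0,1] so log(u1) is finite
    double u2 = unif(gen);
    double r  = std::sqrt(-2.0 * std::log(u1));   // log, cos and sin are the costly part
    double z1 = r * std::cos(2.0 * pi * u2);
    double z2 = r * std::sin(2.0 * pi * u2);

    std::cout << z1 << " " << z2 << "\n";         // two independent N(0,1) draws
    return 0;
}
```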

Lecture 2

  1. Integrating a function over [0,1] is just like calculating the expectation of the function under the uniform distribution. Hence the integral can be estimated by averaging the function evaluated at random uniform [0,1] numbers (see the sketch after this list).
  2. The estimator above is unbiased and consistent.
  3. The error is the difference between the estimated value and the true value. Bias is the expectation of the error. Root mean square error is the square root of the expectation of the squared error.
  4. The empirical variance can be calculated via the general procedure: mean of the squares minus the square of the mean. To get the unbiased estimator, multiply the empirical variance by N/(N-1).
  5. To calculate the number of samples needed for a required accuracy ε: N = (σ s(c)/ε)^2, where s(c) is the factor for the chosen confidence level.
  6. For an unbiased estimator, the root mean square error can be regarded as the standard deviation of the error.
  7. When d > 4, the Monte Carlo method is much more useful than the finite difference method.
  8. Simulating an expectation with independent or correlated random normals is similar: simply apply the inverse normal CDF to uniform draws and take the expectation of the original function composed with invnorm, with respect to the uniform distribution. For correlated random normals, one only needs to multiply the independent normals by the decomposition of the variance-covariance matrix (Cholesky or PCA).
  9. An increase in accuracy always implies an increase in computing time. Hence there is a trade-off between accuracy and efficiency.
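A minimal sketch of the equal-weight Monte Carlo estimator for ∫₀¹ f(u) du = E[f(U)], with the toy integrand f(u) = u² (true value 1/3) chosen by me, including the empirical variance and standard error from items 3-5:

```cpp
#include <cmath>
#include <iostream>
#include <random>

int main() {
    std::mt19937 gen(42);
    std::uniform_real_distribution<double> unif(0.0, 1.0);

    const int N = 1000000;
    double sum = 0.0, sumSq = 0.0;
    for (int i = 0; i < N; ++i) {
        double u = unif(gen);
        double f = u * u;
        sum   += f;
        sumSq += f * f;
    }
    double mean = sum / N;
    // Mean of squares minus square of mean, with the N/(N-1) correction (item 4).
    double var  = (sumSq / N - mean * mean) * N / (N - 1.0);
    double se   = std::sqrt(var / N);            // standard error of the estimate

    std::cout << mean << " +/- " << se << "\n";  // estimate of 1/3
    return 0;
}
```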

Lecture 3

  1. Variance reduction is very important in the Monte Carlo method, since one may apply simple techniques to reduce the variance substantially in certain circumstances.
  2. There are six variance reduction techniques.
  3. Antithetic method (see the sketch after this list): note that this method can only be applied when the density is an even (symmetric) function. Advantage: the variance is always reduced. Disadvantage: the computational cost doubles. Hence there is a net benefit only if the covariance of f(W) and f(-W) is smaller than 0.
  4. Best case: linear payoff. Worst case: symmetric payoff.
  5. Control variate: if there is another payoff g for which we know the expectation, we can use g - E(g) to reduce the error in estimating E(f).
  6. The good situation is when f and g are nearly linearly correlated. The worst situation is when f and g are independent.
  7. Importance sampling: the basic idea is a change of probability measure.
  8. For the last sentence on page 20 of this lecture: the choice of μ2 determines the distribution of the new sampling measure.
  9. For the normal distribution, a change of μ can be useful when one tail is important, while a change of σ is useful when both tails are important.
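A minimal sketch of the antithetic method from item 3: each normal draw Z is paired with -Z and the payoffs averaged. The toy payoff f(z) = exp(z) is my choice; here cov(f(W), f(-W)) = 1 - e < 0, so the pairing gives a net benefit:

```cpp
#include <cmath>
#include <iostream>
#include <random>

int main() {
    std::mt19937 gen(42);
    std::normal_distribution<double> norm(0.0, 1.0);

    const int N = 100000;                 // number of antithetic pairs
    double sum = 0.0;
    for (int i = 0; i < N; ++i) {
        double z = norm(gen);
        sum += 0.5 * (std::exp(z) + std::exp(-z));  // average over the pair
    }
    // True value: E[exp(Z)] = exp(0.5) ≈ 1.6487.
    std::cout << sum / N << "\n";
    return 0;
}
```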

Lecture 4

  1. Stratified sampling: the key idea is to achieve a more regular sampling of the most important dimension of the uncertainty.
  2. Procedure: divide the [0,1] interval into M strata ——> take L samples from each stratum. ML = N, the total sample size.
  3. The procedure for this simulation (see the sketch after this list): break [0,1] into M strata ——> take L uniform samples U in each stratum ——> define independent random normals by applying invnorm to the uniform samples ——> compute the average for each stratum and the overall average.
  4. There is a trade-off between efficiency and confidence.
  5. Note that it is better to sample more from the strata with higher variability.
  6. The multivariate application is similar.
  7. For higher dimensions, the number of cubes to sample from can be quite large. This forces the number of samples from each cube to be quite small. Hence a new method called the Latin hypercube is introduced.
  8. Latin hypercube: generate M points dimension by dimension, using stratified sampling with one value per stratum and assigning the values randomly to the M points, to give precisely one point in each stratum ——> take L such independently generated sets of points, each giving an independent estimate of the average.
  9. In the special case where the function can be written as a sum of one-dimensional functions, there is a very large variance reduction from using a large sample size M.
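A minimal sketch of stratified sampling on [0,1] from items 2-3, with the same toy integrand f(u) = u² as before (my choice, true integral 1/3):

```cpp
#include <iostream>
#include <random>

int main() {
    std::mt19937 gen(42);
    std::uniform_real_distribution<double> unif(0.0, 1.0);

    const int M = 100, L = 100;           // N = M * L total samples
    double overall = 0.0;
    for (int m = 0; m < M; ++m) {
        double stratumSum = 0.0;
        for (int l = 0; l < L; ++l) {
            double u = (m + unif(gen)) / M;   // uniform within stratum m
            stratumSum += u * u;
        }
        overall += stratumSum / L;        // average for this stratum
    }
    std::cout << overall / M << "\n";     // overall average ≈ 1/3
    return 0;
}
```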

Lecture 5

  1. Quasi Monte Carlo: standard quasi Monte Carlo uses the same equal-weight estimator but chooses the points systematically, so the estimate is biased, the error is roughly proportional to 1/N, and there is no confidence interval.
  2. To construct the set of points used for the quasi Monte Carlo method, there is the rank-1 lattice rule (see notes page 9, and the sketch after this list); z is the generating vector, with integer components co-prime with N.
  3. Sobol sequence: the idea of the Sobol sequence is to subdivide each dimension into halves, quarters, etc., so that each sub-cube contains the same number of sample points.
  4. Randomized QMC: randomizing the QMC points (e.g. by a random shift) restores an unbiased estimator and a confidence interval.
  5. QMC points have the property of being more uniformly distributed in the lowest dimensions. Consequently, it is important to think about how the dimensions are allocated to the problem. Previously, we generated correlated normals through the decomposition of the variance-covariance matrix.
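A minimal sketch of a rank-1 lattice rule in 2D: the i-th point is frac(i·z/N) componentwise. The choice N = 55, z = (1, 34) is a common Fibonacci-lattice example of mine, not a generating vector from the notes:

```cpp
#include <cmath>
#include <iostream>

int main() {
    const int N = 55;
    const int z[2] = {1, 34};             // components co-prime with N
    for (int i = 0; i < N; ++i) {
        double x = std::fmod(static_cast<double>(i) * z[0] / N, 1.0);
        double y = std::fmod(static_cast<double>(i) * z[1] / N, 1.0);
        std::cout << x << " " << y << "\n";   // one lattice point per line
    }
    return 0;
}
```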

Lecture 6

  1. Finite precision arithmetic: a floating point number can be represented as f = x × 2^n, where n is the integer exponent, given by some number of bits, and the significand x, with 1/2 ≤ |x| < 1, is also represented by some number of digits.
  2. The relative error is about 10^-16 for long (double) precision and 10^-7 for short (single) precision.
  3. For a sum of N terms, the standard error of the sum is given by 2^-S √N, where S is the number of significand bits and N is the size of the sum.
  4. The error can be fatal when we want to approximate differentiation numerically.
  5. Complex-step trick: using complex numbers gives the same result with less error. We only need to take the imaginary part of the function evaluated at the point (x + i·dx), divided by dx; hence one can take dx as small as one likes. The only requirement is that the function be analytic (see the sketch below).
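A minimal sketch of the complex-step trick from item 5: f'(x) ≈ Im(f(x + i·h))/h, which has no subtractive cancellation, so h can be taken extremely small. The toy function f(x) = exp(x)·sin(x) is my choice:

```cpp
#include <cmath>
#include <complex>
#include <iostream>

int main() {
    auto f = [](std::complex<double> x) { return std::exp(x) * std::sin(x); };

    const double x = 1.0, h = 1e-100;     // h far below machine precision
    std::complex<double> fx = f(std::complex<double>(x, h));  // evaluate at x + i*h
    double deriv = fx.imag() / h;

    // Exact derivative for comparison: exp(x)*(sin(x) + cos(x)).
    double exact = std::exp(x) * (std::sin(x) + std::cos(x));
    std::cout << deriv << " vs " << exact << "\n";
    return 0;
}
```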

Lecture 7

  1. The Greeks are a set of sensitivities that measure the change in the value of a derivative with respect to a change in one parameter.
  2. The error might be quite large if we take an independent random uniform vector for each of X(θ + Δθ) and X(θ - Δθ).
  3. To solve this, we use the same random input for both X(θ + Δθ) and X(θ - Δθ) (see the sketch after this list).
  4. Finite difference sensitivity: there might be issues when the payoff function is not continuous, hence we need to be very careful about the payoff jump at the discontinuity of the function.
  5. The probability that the payoff jumps within the interval [θ - Δθ, θ + Δθ] is O(Δθ). Because of this, the variance might get really large when Δθ is small.
  6. Hence what we want to minimise is the mean square error, and the best choice for Δθ is O(N^-1/5).
  7. A discontinuous second derivative will also make the variance quite large.
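A minimal sketch of a central finite-difference sensitivity with common random numbers (items 2-3). The model d/dθ E[(θ + Z)²] with exact answer 2θ is a toy of mine; the point is that the same draw z is reused for both bumps:

```cpp
#include <iostream>
#include <random>

int main() {
    std::mt19937 gen(42);
    std::normal_distribution<double> norm(0.0, 1.0);

    const int N = 100000;
    const double theta = 1.0, dTheta = 1e-4;
    double sum = 0.0;
    for (int i = 0; i < N; ++i) {
        double z = norm(gen);                       // one draw, used twice
        double up   = (theta + dTheta + z) * (theta + dTheta + z);
        double down = (theta - dTheta + z) * (theta - dTheta + z);
        sum += (up - down) / (2.0 * dTheta);        // central difference
    }
    // Exact sensitivity: d/dθ E[(θ+Z)^2] = 2θ = 2.
    std::cout << sum / N << "\n";
    return 0;
}
```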

Lecture 8

  1. Likelihood ratio method and path-wise sensitivity: the likelihood ratio method considers the derivative of the density function with respect to θ, while path-wise sensitivity takes the derivative of the function whose expectation we want (both are summarized in the formulas after this list).
  2. In the likelihood ratio method, we do not change the measure. We change the function whose expectation we take.
  3. For this method, the variance is very large when σ is small, and it is also large for Δ when T is small. (We are talking about estimating the price.)
  4. For path-wise sensitivity, we differentiate the payoff function instead. In this case, we need to assume that the function is differentiable. A similar approach can be applied to second-order differentiation.
  5. For the discontinuous situation, we can use smooth functions to approximate the discontinuous function and take the limit as the final result. E.g. one example of smoothing is to use the cumulative normal to smooth the digital call payoff.
  6. The idea of both methods is still based on simulation; hence we can calculate the expectation via simulation applied to the transformed function inside the brackets.
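As a compact summary in my notation (which may differ from the lecture's), the two estimators differentiate different factors of the same expectation:

```latex
% Likelihood ratio method: differentiate the density p(x;\theta).
\frac{\partial}{\partial\theta}\,\mathbb{E}[f(X)]
  = \int f(x)\,\frac{\partial p(x;\theta)}{\partial\theta}\,dx
  = \mathbb{E}\!\left[f(X)\,\frac{\partial \log p(X;\theta)}{\partial\theta}\right]

% Path-wise sensitivity: differentiate the payoff along the path.
\frac{\partial}{\partial\theta}\,\mathbb{E}[f(X(\theta))]
  = \mathbb{E}\!\left[f'(X(\theta))\,\frac{\partial X(\theta)}{\partial\theta}\right]
```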

Comment on Notes of Statistics and Financial Data Analysis

Lecture 1

Hypothesis Test:

  1. All the tests are based on the idea that the null hypothesis is true.
  2. There are two ways of testing: check whether the statistic under the null hypothesis lies within the confidence interval, or compute the probability of the observed data given that the null hypothesis is true.
  3. The t distribution with n degrees of freedom is one standard normal rv divided by the square root of a chi-square with n degrees of freedom divided by n (see the formula below).
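In symbols, with Z a standard normal and V an independent chi-square with n degrees of freedom (standard definition):

```latex
T = \frac{Z}{\sqrt{V/n}} \sim t_n
```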

Multiple Comparison

  1. Consider the linear model given in the notes, Y = Xβ + σε; the comparison between two models is based on the ANOVA test, i.e. analysing the differences among group means in a sample.
  2. To compare nested models, we use the F-test with statistic F = ((RSS0 - RSS1)/k) / (RSS1/m), where k is the number of predictors excluded (β = 0) and m = n - k. Then F follows the F(k, m) distribution.
  3. There is another method called the approximate F test, which is used for testing non-nested models (it can be quite useful when the number of samples is large).
  4. Some ideas on the approximate F test are given in Lecture 2.

Lecture 2

Polynomial Regression

  1. Polynomial regression is a regression method for nonlinear effects.
  2. When the order of the polynomial grows, the data may not look like a simple polynomial, and this may cause huge edge effects (i.e., the polynomial may behave significantly peculiarly when the predictor tends to extreme values).

Piecewise Linear Approximation

  1. To overcome the edge effects of polynomial regression, we tend to use piecewise linear approximation instead of simple polynomial regression.
  2. The idea involves two things: first choose the number of nodes used in the model, then fit each section with a linear model. Note the restriction that the fitted model should be continuous at all nodes. To see the details, check continuous piecewise linear approximation (a paper from MIT, a bit advanced).
  3. One can also check page 141 of the book The Elements of Statistical Learning for general piecewise polynomials.
  4. Note that there are packages in R that fit piecewise continuous linear regressions.
  5. The rule of thumb for placing the nodes is to use the quantiles of the predictor. Detailed in lecture notes, lecture 2, page 1.

Spline Bases(Basis)

  1. The idea of splines is to fit cubic polynomials rather than a linear regression. The reason we choose the given basis is that it is more convenient to use. Note that there is a command in R to calculate the splines of the given data.

Natural Spline

  1. Things may still be peculiar when the predictor is extreme; a natural spline, which is constrained to be linear beyond the boundary nodes, tames this, and there is an R command that gives the natural splines for the given data.
  2. Advantage: it gains degrees of freedom, giving a better overall fit.
  3. For more detail, see HTF in the book ESL and an introduction to splines.

Approximation F test

  1. The approximate F test can be applied to two models that are not nested. Several things need care: the amount of data should be moderately large to get a relatively good performance in practice.
  2. In R it can be achieved with the anova function.

Lecture 3

Information Criteria & Model Selection

  1. The AIC is introduced to select models: it rewards an increase in the likelihood and simultaneously penalizes overfitting. For small sample sizes, AICc is used rather than AIC.
  2. Another criterion is the Bayesian Information Criterion (BIC). It penalizes complex models more heavily.
  3. Remark: for finite sample sizes AIC gives better performance, while BIC gives better performance as the sample size tends to infinity.
  4. There are also functions in R for calculating AIC and BIC.

Heteroscedasticity and Weighted Regression

  1. Heteroscedasticity means the variance of the response is a function of the predictor, i.e. the variance is not fixed.
  2. With the normality assumption on the errors, the weight can be calculated from the inverse of the variance function at the predictor, to the power 1/2. Consequently, the residual sum of squares can be calculated with the corresponding weights.
  3. Hence the weight of each observation is related to the variance given the predictor. Therefore the weight matrix can be calculated by placing the corresponding weights on the diagonal of the matrix.
  4. Cook's distance is used to identify outliers in the given data. When D is greater than 0.5, the point might influence the prediction, and when D is greater than 1 it is highly likely to be an outlier.

Lecture 4

Normality Testing

  1. QQ-plot: this checks normality by plotting the sample quantiles against the theoretical quantiles. If the data follow a normal distribution, the QQ-plot is roughly a straight line.
  2. For formal tests, see the Jarque-Bera test and the Shapiro-Wilk test.
  3. The J-B test involves the skewness and kurtosis of the normal distribution.
  4. The procedure for the J-B test: standardize the data ——> calculate J ——> reject for large J.
  5. The procedure for the S-W test: standardize the data ——> calculate B ——> reject for large B.

Risk Measurement

Value at Risk

  1. Note that L denotes the loss distribution: the larger the value, the larger the loss. Hence there are two ways of defining VaR, the value at risk: one depends on the loss and the other on the profit = -loss. Hence it should be specified whether the given distribution is a loss distribution or a profit distribution.
  2. There are four main issues with the definition of value at risk: model risk, liquidity risk, parameter-choice risk, and non-subadditivity.

Expected Shortfall

  1. ES is introduced to solve the problem of non-subadditivity.
  2. The meaning of the definition of expected shortfall is the average value at risk over the tail beyond the given level α.
  3. ES is subadditive.
  4. The second definition of ES should take the coefficient (1 - α)^-1 instead of the original one (see the formulas below).
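In the loss-distribution convention (my notation; the lecture's conventions may differ):

```latex
% Value at risk at confidence level \alpha for a loss L:
\mathrm{VaR}_\alpha(L) = \inf\{\, l \in \mathbb{R} : P(L \le l) \ge \alpha \,\}

% Expected shortfall as the tail average of VaR, with the (1-\alpha)^{-1} coefficient:
\mathrm{ES}_\alpha(L) = \frac{1}{1-\alpha}\int_\alpha^1 \mathrm{VaR}_u(L)\,du
```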

Lecture 5

Financial Return

  1. The return is defined as the price of today minus the price of yesterday, divided by the price of yesterday.
  2. The annualized return simply multiplies the return by 252 (the number of trading days per year); see the formulas below.
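In symbols (a standard convention, with P_t the price on day t and r̄ the average daily return):

```latex
r_t = \frac{P_t - P_{t-1}}{P_{t-1}}, \qquad r_{\mathrm{annual}} \approx 252\,\bar{r}
```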

Stationarity

  1. There are two types of stationarity: weak and strong.
    1. Weak stationarity just requires that the first and second moments are constant and the autocovariance is a function of the lag τ only.
    2. Strong (strict) stationarity requires that the joint distributions at all times are the same regardless of the lag τ.
  2. Note that the T on page 2 can be regarded as the number of data points, if we consider the time intervals between observations to be equal.

PACF

  1. To calculate the PACF at lag k, we regress the series on the intermediate lags and then compute the correlation between X_t and X_{t-k} after removing their linear effect, rather than calculating the autocorrelation directly from the data.
  2. The matrix W can be obtained from the formula in the line under Theorem 1. The diagonal elements of the matrix can also be obtained via the formula for w_hh; as a result of multivariate normality and the variance, we can test whether the sample autocorrelations are consistent with the normal distribution.
  3. Tests for autocorrelation:
    1. Ljung–Box portmanteau test. (The reference distribution is the chi-square distribution.)

Lecture 6

  1. Estimation of AR(1):
    1. Method of moments.
    2. MLE method.
  2. Forecasting AR(1): the forecast is obtained by writing x_{t+m} recursively in terms of x_t.

Lecture 7

  1. AR(p) and MA(q): the ARMA model generalises the AR model and the MA model. Using the backshift operator, we can write the model as Φ(B)X_t = Θ(B)ε_t.
  2. For stationarity, the roots of Φ(B) should all be outside the unit circle.
  3. The autocorrelation of the MA(q) model can be calculated directly.
  4. X is causal if and only if Φ has no zeros inside the closed complex unit circle.
  5. X is invertible if and only if Θ has no zeros inside the closed complex unit circle.
  6. Response to shock: for an AR process, we generally have that ψ_i collapses exponentially quickly. For a pure MA process, ψ_i is zero for i large enough.

Lecture 8

  1. Fitting AR(p):
    1. Regression method: a basic approach to calibrating an ARMA model is to first fit a long autoregressive model to the data. This allows estimation of the innovations via the residuals.
    2. Yule–Walker equations.
    3. MLE method: it may be quite slow with a large dataset; see page 3 of the lecture 8 notes.
  2. Diagnostics: the key is to ensure that there is no further relationship between the residuals from the fitted model and the predictors, and that a normal approximation is appropriate.

Notes on the TI8 Grand Final

As a Dota 2 fan, I have not missed a single TI in recent years. In competitive sports there is laughter and there are tears every year, but the tears of LGD's defeat at this year's TI brought me a sadness that is harder to put into words.

Looking back over the five games, apart from the draft in game one, LGD had chances to take the other four. But much of the time they lost not to OG but to themselves: Ame's hot-headed feeds, Maybe's questionable initiations, Chalice teleporting home during the Roshan fight. This string of mistakes cost them the whole series and left Chinese viewers with five dejected silhouettes.

The result left me miserable for an entire day. If we compare only the raw skill of each player, LGD are a cut above OG. The one problem is that each of them is so good that they are supremely confident in themselves, and there is no proper leader. Chalice also indirectly said in an earlier interview that LGD has no clearly defined captain; perhaps it was this lack of overall leadership that led to the series of fatal mistakes.

Looking back at this series once more, I seem to see the 2013 Spurs, who were once so close to the championship yet still lost it through twists of fate. Ana was like Ray Allen sinking the crucial shot, carrying OG to the title. For that very reason, I believe LGD will be like the 2014 Spurs and take back what they lost. I hope that next year Maybe, Fy, Ame, Chalice and Xnova can pull themselves together, carry the banner of Chinese Dota once again, and win TI9 in Shanghai.

From them back to myself: haven't I, too, often neglected to examine myself? External pressure makes me focus all my attention outward, leaving little examination of my own self and heart. Over time, one loses oneself, lost in the torrent of the outside world.

I hope LGD can start examining themselves from within, and I hope I can re-examine myself as well: toward family and friends, toward study and career, is there anything I have handled poorly? Only then can one go further on the road to success.

Anime List (Updating)

This article is about the anime I have watched and want to recommend to you. I am not an enthusiastic anime fan, but I will try my best to give you some suggestions. I will update this page from time to time, since I do not have much time to watch anime.

In addition, sometimes I will also write reviews of anime I have watched, giving personal opinions on the story, the characters, and the ideas the creators want to convey. (Due to language reasons, reviews will probably be written in Mandarin. For English readers, I will try to find proper translations for these reviews later.)

Anime Review

For a good anime, I will write an article to discuss the story, the characters and the ideas contained in it. Due to my limited English ability, I will write all these articles in Chinese. If you are really interested in one of them, I will try translating it into English. In addition, I will post all links to the reviews here for your convenience.


Old Anime:

  1. Fate/Zero
  2. Fate/Stay night
  3. A certain magical Index
  4. A certain scientific railgun
  5. Re:0
  6. Evangelion
  7. Gundam Seed/ Destiny
  8. Your lie in April
  9. Tsuki ga Kirei
  10. Tsuredure Children
  11. Monthly Girls’ Nozaki-kun
  12. Steins;Gate
  13. Mob Psycho 100


2018/01

  1. Darling in the Franxx
  2. Violet Evergarden
  3. Cardcaptor Sakura Clear Card
  4. Karakai Jozu no Takagi-san
  5. The Disastrous Life of Saiki K
  6. The Ryuo’s Work is Never Done!