# Recent Content

## The project

I try to avoid publishing my code solving homework assignments, but this Udacity SDC project is generic enough to be useful in a wider context. So here you have it. The task was to fuse together radar and lidar measurements using two kinds of Kalman Filters to estimate the trajectory of a moving bicycle. The unscented filter uses the CTRV model tracking the position, speed, yaw, and yaw rate, whereas the extended filter uses the constant velocity model.

The Unscented Filter result

Both algorithms performed well, with the CTRV model predicting the velocity significantly better. The values below are RMSEs of the prediction against the ground truth. The first two values represent the position, the last two - the velocity.

Extended filter:

``````
]==>  ./ExtendedKF  ../data/obj_pose-laser-radar-synthetic-input.txt ../src/ekf.txt
Accuracy - RMSE:
0.0973826
0.0853209
0.441738
0.453757
``````

Unscented filter:

``````
]==>  ./UnscentedKF  ../data/obj_pose-laser-radar-synthetic-input.txt ../src/ukf.txt
Accuracy - RMSE:
0.0659867
0.0811041
0.277747
0.166186
``````
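
For reference, the RMSE metric used above can be computed with a few lines of NumPy (a hypothetical helper, not part of the project code):

```python
import numpy as np

def rmse(estimates, ground_truth):
    """Root mean squared error, computed per state component.

    Both arguments are arrays of shape (n_samples, n_components),
    e.g. columns [px, py, vx, vy]. Returns one RMSE per column,
    matching the four values printed by the filters above.
    """
    estimates = np.asarray(estimates, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return np.sqrt(np.mean((estimates - ground_truth) ** 2, axis=0))
```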

## The code

I wrote a handy library that does most of the math and provides various concrete implementations of Kalman predictors and updaters:

```
class KalmanPredictor {
  public:
    virtual ~KalmanPredictor() {}
    virtual void Predict(KalmanState &state, uint64_t dt) = 0;
};

class KalmanUpdater {
  public:
    virtual ~KalmanUpdater() {}
    virtual void Update(KalmanState           &state,
                        const Eigen::VectorXd &z) = 0;
};
```

The code you need to implement yourself depends on the sensor, the model, and the type of filter you use. For example, for the CTRV model and a lidar measurement, you only need to specify the projection matrix and the sensor noise covariance:

```
class LidarUpdater: public LinearKalmanUpdater {
  public:
    LidarUpdater() {
      H_ = MatrixXd(2, 5);
      H_ << 1, 0, 0, 0, 0,
            0, 1, 0, 0, 0;

      R_ = MatrixXd(2, 2);
      R_ << 0.0225,      0,
            0,      0.0225;
    }
};
```
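
Under the hood, a linear updater like this one performs the textbook Kalman measurement step using exactly these `H` and `R` matrices. A NumPy sketch of that step (variable names are mine, not the library's):

```python
import numpy as np

def linear_update(x, P, z, H, R):
    """Standard linear Kalman measurement update.

    x, P -- state mean and covariance; z -- the measurement;
    H, R -- projection matrix and sensor noise covariance.
    """
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```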

See here and here for more examples. The state travels around in an object of the `KalmanState` class:

```
struct KalmanState {
  KalmanState(int n) {
    x = Eigen::VectorXd(n);
    P = Eigen::MatrixXd(n, n);
    x.fill(0.0);
    P.fill(0.0);
  }
  Eigen::VectorXd x;              // mean
  Eigen::MatrixXd P;              // covariance
  Eigen::MatrixXd sigma_points;   // sigma points
  double          nis = 0;        // Normalized Innovation Squared
};
```

All this ends up with the measurement update code boiling down to this:

```
double dt = measurement.timestamp - previous_timestamp_;
previous_timestamp_ = measurement.timestamp;
predictor_->Predict(state_, dt);
updaters_[measurement.sensor_type]->Update(state_, measurement.data);
return state_;
```

See the full code on GitHub.

## Board bring-up

I started playing with the FRDM-K64F board recently. I want to use it as a base for a bunch of hobby projects. The start-up code is not that different from the one for Tiva, which I describe here - it's the same Cortex-M4 architecture after all. Two additional things need to be taken care of, though: flash security and the COP watchdog.

The K64F MCU restricts external access to a bunch of resources by default. It's a great feature if you want to ship a product, but it makes debugging impossible. The Flash Configuration Field (see section 29.3.1 of the datasheet) defines the default security and boot settings.

```
static const struct {
  uint8_t backdoor_key[8];  // backdoor key
  uint8_t fprot[4];         // program flash protection (FPROT{0-3})
  uint8_t fsec;             // flash security (FSEC)
  uint8_t fopt;             // flash nonvolatile option (FOPT)
  uint8_t feprot;           // EEPROM protection (FEPROT)
  uint8_t fdprot;           // data flash protection (FDPROT)
} fcf  __attribute__ ((section (".fcf"))) = {
  {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
  {0xff, 0xff, 0xff, 0xff}, // disable flash program protection
  0x02,                     // disable flash security
  0x01,                     // disable low-power boot (section 6.3.3)
  0x00,
  0x00
};
```
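
As a sanity check, the same 16-byte layout can be packed with Python's `struct` module; this is just a verification sketch of the field offsets, unrelated to the firmware itself:

```python
import struct

# 8-byte backdoor key, 4 bytes of FPROT, then FSEC, FOPT, FEPROT, FDPROT --
# 16 bytes in total, exactly filling addresses 0x400-0x40f.
fcf = struct.pack(
    "<8s4sBBBB",
    bytes(8),                # backdoor key: all zeros
    b"\xff\xff\xff\xff",     # FPROT0-3: flash program protection disabled
    0x02,                    # FSEC: flash security disabled
    0x01,                    # FOPT: low-power boot disabled
    0x00,                    # FEPROT
    0x00,                    # FDPROT
)
assert len(fcf) == 16
```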

If flash protection (the `fprot` field) is not disabled, you won't be able to flash new code by copying it to the MBED partition and will have to run mass erase from OpenOCD every time:

```
interface cmsis-dap
set CHIPNAME k60
source [find target/kx.cfg]
init
kinetis mdm mass_erase
```

If the MCU is in the secured state (the `fsec` field), the debugger will have no access to memory.

The structure listed above needs to end up in flash just after the interrupt vector. I use the linker script to make sure it happens. I define the appropriate memory block:

```
FLASH-FCF  (rx)  : ORIGIN = 0x00000400, LENGTH = 0x00000010
```

And then put the `.fcf` section in it:

```
.fcf :
{
KEEP(*(.fcf))
} > FLASH-FCF
```

See here.

I also disable the COP (computer operating properly) watchdog, which resets the MCU if it is not serviced often enough.

```
WDOG_UNLOCK = 0xc520;        // unlock magic #1
WDOG_UNLOCK = 0xd928;        // unlock magic #2
for(int i = 0; i < 2; ++i);  // delay a couple of cycles
WDOG_STCTRLH &= ~0x0001;     // disable the watchdog
```

You can get the template code at GitHub.

## The challenge

Here's another cool project I did as a part of Udacity's self-driving car program. There were two problems to solve. The first one was to find the lane lines and compute some of their properties. The second one was to detect and draw bounding boxes around nearby vehicles. Here's the result I got:

Detecting lane lines and vehicles

## Detecting lanes

The first thing I do after correcting for camera lens distortion is applying a combination of Sobel operators and color thresholding to get an image of edges. This operation makes lines more pronounced and therefore much easier to detect.

Edges

I then get a birds-eye view of the scene by applying a perspective transform and produce a histogram of all the white pixels located in the bottom half of the image. The peaks in this histogram indicate the presence of mostly vertical lines, which is what we're looking for. I detect all these lines by using a sliding window search. I start at the bottom of the image and move towards the top adjusting the horizontal position of each successive window to the average of the x coordinate of all the pixels contained in the previous one. Finally, I fit a parabola to all these pixels. Out of all the candidates detected this way, I select a pair that is the closest to being parallel and is roughly in the place where a lane line would be expected.
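The histogram step of the search can be sketched as follows. The project itself locates the peaks with scipy's `find_peaks_cwt`; this simplified stand-in just takes the strongest column in each horizontal half of the image:

```python
import numpy as np

def lane_line_bases(edges):
    """Find starting x positions of lane lines in a binary edge image.

    Sums white pixels column-wise over the bottom half of the image and
    returns the strongest column in each horizontal half as a candidate
    base for the sliding window search.
    """
    bottom = edges[edges.shape[0] // 2:, :]
    histogram = bottom.sum(axis=0)
    midpoint = histogram.shape[0] // 2
    left = int(np.argmax(histogram[:midpoint]))
    right = midpoint + int(np.argmax(histogram[midpoint:]))
    return left, right
```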

The orange area in the picture below visualizes the histogram, and the red boxes with blue numbers in them indicate the positions of the peaks found by the `find_peaks_cwt` function from scipy.

Bird's eye view - histogram search

Once I have found the lanes in one video frame, locating them in the next one is much simpler - their position did not change by very much. I just take all the pixels from a close vicinity of the previous detection and fit a new polynomial to them. The green area in the image below denotes the search range, and the blue lines are the newly fitted polynomials.

Bird's eye view - vicinity search

I then use the equations of the parabolas to calculate the curvature. The program that produced the video above uses cross-frame averaging to make the lines smoother and to vet new detections in successive video frames.
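For a parabola x = ay² + by + c, the radius of curvature at a point y follows from the standard formula; as a quick sketch (a hypothetical helper, with unit conversion left out):

```python
def curvature_radius(a, b, y):
    """Radius of curvature of x = a*y**2 + b*y + c at the point y.

    R = (1 + (2*a*y + b)**2)**1.5 / |2*a|

    A nearly straight line (a close to 0) yields a very large radius.
    """
    return (1 + (2 * a * y + b) ** 2) ** 1.5 / abs(2 * a)
```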

## Vehicle detection

I detect cars by dividing the image into a bunch of overlapping tiles of varying sizes and running each tile through a classifier to check if it contains a car or a fraction of a car. In this particular solution, I used a linear support vector machine (`LinearSVC` from `sklearn`). I also wrapped it in a `CalibratedClassifierCV` to get a measure of confidence. I rejected predictions of cars that were less than 85% certain. The classifier was trained on data harvested from the GTI, KITTI, and Udacity datasets, from which I collected around 25 times more background samples than cars to limit the number of false-positive detections.
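The confidence gating can be sketched with sklearn on toy data; here two random blobs stand in for the real HOG feature vectors:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV

# Toy stand-in for HOG feature vectors: two well-separated blobs.
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(100, 8) - 2, rng.randn(100, 8) + 2])
y = np.array([0] * 100 + [1] * 100)   # 0 = background, 1 = car

clf = CalibratedClassifierCV(LinearSVC())  # wrap the SVM to get probabilities
clf.fit(X, y)

probs = clf.predict_proba(X)[:, 1]
detections = probs > 0.85                  # reject predictions below 85%
```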

As far as image features are concerned, I use only Histograms of Oriented Gradients, with parameters that are essentially the same as the ones presented in this paper dealing with detection of humans. I used OpenCV's `HOGDescriptor` to extract the HOGs because it can compute the gradients taking all of the color channels into account (see here) - a capability that other libraries typically lack, limiting you to a form of grayscale. The training set consists of roughly 2M images of 64 by 64 pixels.

Tiles containing cars

Since the samples the classifier trains on contain pictures of fractions of cars, the same car is usually detected multiple times in overlapping tiles. Also, the types of background differ quite a bit, and it's hard to find images of all the possible things that are not cars. Therefore false-positives are quite frequent. To combat these problems, I use heat maps that are averaged across five video frames. Every pixel that has less than three detections on average per frame is rejected as a false positive.
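The averaging-and-thresholding step can be sketched like this (a simplified stand-in for the project's code):

```python
import numpy as np

def filter_detections(heat_maps, threshold=3.0):
    """Average per-pixel detection counts across frames and threshold.

    heat_maps -- list of 2D arrays, one per video frame; each pixel holds
    the number of positive tiles covering it in that frame. Pixels whose
    per-frame average falls below the threshold are rejected as false
    positives.
    """
    average = np.mean(heat_maps, axis=0)
    return average >= threshold
```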

Heat map

I then use OpenCV's `connectedComponentsWithStats` to find connected components and get centroids and bounding boxes for the detections. The centroids are used to track the objects across frames and smooth the bounding boxes by averaging them with 12 previous frames. To further reject false-positives, an object needs to be classified as a car in at least 6 out of 12 consecutive frames.

## Conclusions

The topic is pretty fascinating and the results I got could be significantly improved by:

• employing smarter sliding window algorithms (e.g., ones with momentum) to better detect dashed lines that are substantially curved
• finding better ways to do perspective transforms
• using a better classifier for cars (a deep neural network perhaps)
• using techniques like YOLO
• using something smarter than connected components to distinguish overlapping detections of different vehicles - mean shift clustering comes to mind
• making performance improvements here and there (use C++, parallelize video processing and so on)

I learned a lot of computer vision techniques and had plenty of fun doing this project. I also spent a lot of time reading the code of OpenCV. It has a lot of great tutorials, but its API documentation is lacking.

## The project

A neural network learned how to drive a car by observing how I do it! :) I must say that it's one of the coolest projects that I have ever done. Udacity provided a simulator program where you had to drive a car for a while on two tracks to collect training data. Each sample consisted of a steering angle and images from three front-facing cameras.

The view from the cameras

Then, in the autonomous driving mode, you are given an image from the central camera and must send back an appropriate steering angle, such that the car does not go off-track.

An elegant solution to this problem was described in a paper by NVIDIA from April 2016. I managed to replicate it in the simulator. Not without issues, though. The key takeaways for me were:

• The importance of making sure that the training data sample is balanced. That is, making sure that some categories of steering angles are not over-represented.
• The importance of randomly jittering the input images. To quote another paper: "ConvNets architectures have built-in invariance to small translations, scaling and rotations. When a dataset does not naturally contain those deformations, adding them synthetically will yield more robust learning to potential deformations in the test set."
• Not over-using dropout.
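
The balancing point can be illustrated by capping the number of samples per steering-angle histogram bin; this is a sketch of the idea, not the actual training code, and the bin count and cap are made up:

```python
import numpy as np

def balance_samples(angles, n_bins=25, max_per_bin=200, seed=0):
    """Return indices of a balanced subset of the training samples.

    Buckets steering angles into n_bins histogram bins and keeps at most
    max_per_bin randomly chosen samples from each, so that near-zero
    angles (driving straight) do not dominate the training set.
    """
    rng = np.random.RandomState(seed)
    angles = np.asarray(angles, dtype=float)
    bins = np.linspace(angles.min(), angles.max() + 1e-6, n_bins + 1)
    keep = []
    for i in range(n_bins):
        idx = np.where((angles >= bins[i]) & (angles < bins[i + 1]))[0]
        if len(idx) > max_per_bin:
            idx = rng.choice(idx, max_per_bin, replace=False)
        keep.extend(idx.tolist())
    return np.array(keep)
```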

The model needed to train for 35 epochs. Each epoch consisted of 24 batches of 2048 images with on-the-fly jittering. It took 104 seconds to process one epoch on Amazon's p2.xlarge instance and 826 seconds to do the same thing on my laptop. What took an hour on a Tesla K80 GPU would have taken my laptop over 8 hours.

## Results

Below are some sample results. The driving is not very smooth, but I blame that on myself not being a good driving model ;) The second track is especially interesting, because it differs from the one that the network was trained on. Interestingly enough, a MacBook Air did not have enough juice to run both the simulator and the model, even though the model is fairly small. I ended up having to create an ssh tunnel to my Linux laptop.

Track #1

Track #2

## Intro

Writing this blog became increasingly tedious over time. The reason for this was the slowness of the rendering tool I use - coleslaw. It seemed to work well for other people, though, so I decided to investigate what I am doing wrong. The problem came from the fact that the code coloring implementation (which I co-wrote) spawned a Python process every time it received a code block to handle. The coloring itself was fast. Starting and stopping Python every time was the cause of the issue. A solution for this malady is fairly simple. You keep the Python process running at all times and communicate with it via standard IO.

Surprisingly enough, I could not find an easy and convenient way to do it. The dominant paradigm of `uiop:run-program` seems to be spawn-process-close, and it does not allow for easy access to the actual streams. `sb-ext:run-program` does hand me the stream objects that I need, but it's not portable. While reading the code of uiop trying to figure out how to extract the stream objects from `run-program`, I accidentally discovered `uiop:launch-program` which does exactly what I need in a portable manner. It was implemented in asdf-3.1.7.39 released on Dec 1st, 2016 (a month and a half ago!). This post is meant as a piece of documentation that can be indexed by search engines to help spread my happy discovery. :)

## Python

The Python code reads commands from standard input and writes the responses to standard output. Both commands and responses consist of a header followed by a newline and, optionally, a payload.

The commands are:

• `exit` - what it does is self-evident
• `pygmentize|len|lang[|opts]`:
  • `len` is the length of the code snippet
  • `lang` is the language to colorize
  • the optional parameter `opts` is the configuration of the HTML formatter
  • after the newline, `len` utf-8 characters of the code block need to follow

There's only one response: `colorized|len`, followed by a newline and `len` utf-8 characters of the colorized code as an HTML snippet.
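A minimal Python-side loop implementing this protocol could look like the following. It is a simplified sketch; in the real script, the `colorize` callable wraps pygments:

```python
import io

def serve(input, output, colorize):
    """Serve the colorization protocol until an `exit` command arrives.

    colorize -- a callable (lang, opts, code) -> HTML string.
    """
    while True:
        header = input.readline().rstrip('\n')
        if header == 'exit':
            break
        parts = header.split('|')
        length, lang = int(parts[1]), parts[2]
        opts = parts[3] if len(parts) > 3 else None
        code = input.read(length)       # exactly `len` characters follow
        html = colorize(lang, opts, code)
        output.write('colorized|%d\n%s' % (len(html), html))
        output.flush()
```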

Python's automatic inference of standard IO's encoding is still pretty messed up, even in Python 3. It's a good idea to create wrapper objects and interact only with them:

```
input  = io.TextIOWrapper(sys.stdin.buffer,  encoding='utf-8')
output = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8')
```

Printing diagnostic messages to standard error output is useful for debugging:

```
def eprint(*args, **kwargs):
    print(*args, file=sys.stderr, **kwargs)
```

## Lisp

OK, I have a Python script that does the coloring. Before I can use it, I need to tell ASDF about it and locate it in the filesystem. The former is done by using the `:static-file` qualifier in the `:components` list. The latter is a bit more complicated, but since the file's location is known relative to the Lisp file it will be used with, it's doable.

```
(defvar *pygmentize-path*
  (merge-pathnames "pygmentize.py"
                   #.(or *compile-file-truename* *load-truename*))
  "Path to the pygmentize script")
```

The trick here is to use `#.` to execute the statement at read-time. You can see the full explanation here.

With that out of the way, I can start the renderer with:

```
(defmethod start-concrete-renderer ((renderer (eql :pygments)))
  (setf *pygmentize-process* (uiop:launch-program
                              (list *python-command*
                                    (namestring *pygmentize-path*))
                              :input :stream
                              :output :stream)))
```

For debugging purposes, it's useful to add `:error-output "/tmp/debug"`, so that the diagnostics do not get eaten up by `/dev/null`.

To stop the process, we send it the `exit` command, flush the stream, and wait until the process dies:

```
(defmethod stop-concrete-renderer ((renderer (eql :pygments)))
  (write-line "exit" (process-info-input *pygmentize-process*))
  (force-output  (process-info-input *pygmentize-process*))
  (wait-process *pygmentize-process*))
```

The Lisp part of the colorizer sends the `pygmentize` command together with the code snippet to Python and receives the colorized HTML:

```
(defun pygmentize-code (lang params code)
  (let ((proc-input  (process-info-input *pygmentize-process*))
        (proc-output (process-info-output *pygmentize-process*)))
    (write-line (format nil "pygmentize|~a|~a~@[|~a~]"
                        (length code) lang params)
                proc-input)
    (write-string code proc-input)
    (force-output proc-input)
    (let ((nchars (parse-integer
                   (nth 1
                        (split-sequence #\| (read-line proc-output))))))
      (coerce (loop repeat nchars
                    for x = (read-char proc-output)
                    collect x)
              'string))))
```

See the entire pull request here.

## Stats

I was able to bring the time it takes to generate this blog down from well over a minute to less than three seconds.

``````
]==> time ./coleslaw-old.x /path/to/blog/
./coleslaw-old.x /path/to/blog/  66.40s user 6.19s system 98% cpu 1:13.55 total
]==> time ./coleslaw-new-no-renderer.x /path/to/blog/
./coleslaw-new-no-renderer.x /path/to/blog/  65.50s user 6.03s system 98% cpu 1:12.53 total
]==> time ./coleslaw-new-renderer.x /path/to/blog/
./coleslaw-new-renderer.x /path/to/blog/  2.78s user 0.27s system 106% cpu 2.849 total
``````
• `coleslaw-old.x` is the original code
• `coleslaw-new-no-renderer.x` starts and stops the renderer with every code snippet
• `coleslaw-new-renderer.x` starts the renderer beforehand and stops it after all the job is done

## The classifier

I have built a road sign classifier recently as an assignment for one of the online courses I participate in. The particulars of the implementation are unimportant. It suffices to say that it's a variation on the solution found in this paper by Sermanet and LeCun and operates on the same data set of German road signs. The solution has around 4.3M trainable parameters, and there are around 300k training (after augmentation), 40k validation (after augmentation), and 12k testing samples. The classifier reached the testing accuracy of 98.67%, which is just about human performance. That's not bad.

What I want to share the most, though, is none of the above, but rather the training benchmarks. I tested the model on four different machines, in six configurations in total:

• x1-cpu: My laptop, four i7-6600U CPU cores at 2.60GHz each and 4MB cache, 16GB RAM
• g2.8-cpu: Amazon's g2.8xlarge instance, 32 Xeon E5-2670 CPU cores at 2.60GHz each with 20MB cache, 60GB RAM
• g2.2-cpu: Amazon's g2.2xlarge instance, 8 Xeon E5-2670 CPU cores at 2.60GHz each with 20MB cache, 15GB RAM
• g2.8-gpu: The same as g2.8-cpu but used the 4 GRID K520 GPUs
• g2.2-gpu: The same as g2.2-cpu but used the 1 GRID K520 GPU
• p2-gpu: Amazon's p2.xlarge instance, 4 Xeon E5-2686 CPU cores 2.30GHz each with 46MB cache, 60GB RAM, 1 Tesla K80 GPU

Here are the times it took to train one epoch as well as how long it would have taken to train for 540 epochs (it took 540 epochs to get the final solution):

• x1-cpu: 40 minutes/epoch, 15 days to train
• g2.8-cpu: 6:24/epoch, 2 days 9 hours 36 minutes to train
• g2.2-cpu: 16:15/epoch, 6 days, 2 hours 15 minutes to train
• g2.8-gpu: 1:37/epoch, 14 hours, 33 minutes to train
• g2.2-gpu: 1:37/epoch, 14 hours, 33 minutes to train
• p2-gpu: 56 seconds/epoch, 8 hours, 24 minutes to train

I was quite surprised by these results. I knew that GPUs are better suited for this purpose, but I did not know that they are this much better. The slowness of the laptop might have been due to swapping. I ran the test with the usual (unused) laptop workload and Chrome taking a lot of RAM. I was not doing anything during the test, though. When testing with g2.8-cpu, it looked like only 24 out of the 32 CPU cores were busy. Three additional GPUs on g2.8-gpu did not seem to have made any difference. TensorFlow allows you to pin operations to devices, but I did not do any of that. The test just runs the same exact graph as g2.2-gpu. There's likely a lot to gain by doing manual tuning.

## The results

I tested it on pictures of a bunch of French and Swiss road signs taken around where I live. These are in some cases different from their German counterparts. When the network had enough training examples, it generalized well, otherwise, not so much. In the images below, you'll find sample sign pictures and the top three logits returned by the classifier after applying softmax.
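The confidences shown in the images are obtained by applying softmax to the logits and keeping the three most likely classes; in Python terms (a hypothetical helper, not the project's code):

```python
import numpy as np

def top3(logits, labels):
    """Softmax the logits and return the three most likely labels."""
    logits = np.asarray(logits, dtype=float)
    exp = np.exp(logits - logits.max())   # subtract the max for stability
    probs = exp / exp.sum()
    order = np.argsort(probs)[::-1][:3]
    return [(labels[i], probs[i]) for i in order]
```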

Speed limit - training (top) vs. French (bottom)

No entry - training (top) vs. French (bottom)

Traffic lights - training (top) vs. French (bottom)

No trucks - training (top) vs. French (bottom)

## Intro

I must be getting old and eccentric. I have recently started working on Coursera's Scala Specialization. It's all great, but my first realization was that the tools they use are not going to work for me. The problem lies mainly with sbt - their build tool. It fetches and installs in your system God knows what from God knows where to bootstrap itself. I don't trust this kind of behavior in the slightest. I understand that automatic pulling of dependencies and auto-update may save work, but they are also dangerous. I don't even want to mention that they contribute to the general bloat and sluggishness that plagues the Java world. You don't need to know what depends on what, so everything uses everything, without a single thought.

Having said all that, I do trust and use QuickLisp. So, go figure.

I would still like to take the class, but I would like to do it using Emacs and command line. Here's what I did.

## Prerequisites

You'll need the following packages:

``````
==> sudo apt-get install default-jdk scala ant junit4 scala-mode-el
``````

The assignments they give seem to have a stub for implementation and a bunch of scalatest test suites. I will also want to write some other code to play with things. This is the directory structure I used:

``````
==> find -type f
./01-hello-world/build.xml
./01-hello-world/src/HelloWorld.scala
./02-square-root-newton/build.xml
./02-square-root-newton/src/SquareRoot.scala
./a00-lists/build.xml
./a00-lists/src/example/Lists.scala
./a00-lists/test/example/ListsSuite.scala
./a01-pascal-balance-countchange/build.xml
./a01-pascal-balance-countchange/src/recfun/Main.scala
./a01-pascal-balance-countchange/test/recfun/BalanceSuite.scala
./a01-pascal-balance-countchange/test/recfun/CountChangeSuite.scala
./a01-pascal-balance-countchange/test/recfun/PascalSuite.scala
./build.properties
./build-targets.xml
./lib/get-libs.sh
``````

You can get all this here and will need to run the `get-libs.sh` script to fetch the two scalatest jar files before you can do anything else.

## Ant

I wrote an ant build template that sets up `scalac` and `scalatest` tasks and provides `compile` and `test` targets. All that the `build.xml` files in the subdirectories need to do is define some properties and import the template:

```
<project default="compile">
  <property name="jar.name" value="recfun.jar" />
  <property name="jar.class" value="recfun.Main" />
  <property name="tests-wildcard" value="recfun" />
  <import file="../build-targets.xml" />
</project>
```
• `jar.name` is the name of the resulting jar file
• `jar.class` is the default class that should run
• `tests-wildcard` is the name of the package containing the test suites (they are discovered automatically)

You can then run the thing (some output has been omitted for clarity):

``````
]==> ant compile
init:
[mkdir] Created dir: ./build/classes
[mkdir] Created dir: ./build-test

compile:
[scalac] Compiling 1 source file to ./build/classes
[jar] Building jar: ./build/recfun.jar

]==> ant test
compile-test:
[scalac] Compiling 3 source files to ./build-test

test:
[scalatest] Discovery starting.
[scalatest] Discovery completed in 127 milliseconds.
[scalatest] Run starting. Expected test count is: 11
[scalatest] BalanceSuite:
[scalatest] - balance: '(if (zero? x) max (/ 1 x))' is balanced
[scalatest] - balance: 'I told him ...' is balanced
[scalatest] - balance: ':-)' is unbalanced
[scalatest] - balance: counting is not enough
[scalatest] PascalSuite:
[scalatest] - pascal: col=0,row=2
[scalatest] - pascal: col=1,row=2
[scalatest] - pascal: col=1,row=3
[scalatest] CountChangeSuite:
[scalatest] - countChange: example given in instructions
[scalatest] - countChange: sorted CHF
[scalatest] - countChange: no pennies
[scalatest] - countChange: unsorted CHF
[scalatest] Run completed in 246 milliseconds.
[scalatest] Total number of tests run: 11
[scalatest] Suites: completed 4, aborted 0
[scalatest] Tests: succeeded 11, failed 0, canceled 0, ignored 0, pending 0
[scalatest] All tests passed.
``````

## Submitting the assignments

I could probably write some code to do the assignment submission in Python or using ant, but I am too lazy for that. I will use the container handling script that I wrote for another occasion and install `sbt` in there. The `docker/devel` subdirectory contains a `Dockerfile` for an image that has `sbt` installed in it.

``````
==> git clone https://github.com/ljanyst/jail.git
==> cd jail/docker/devel
==> docker build -t jail:v01-dev .
``````

This is the configuration script:

```
CONT_HOSTNAME=jail-scala
CONT_HOME=$HOME/Contained/jail-scala/home
CONT_NAME=jail:v01-dev
CONT_RESOLUTION=1680x1050
```

## Intro

The game code up until this point abuses timers a lot. It has a timer to handle rendering and to refresh the display, and a timer to change notes of a tune. These tasks are not very time sensitive. A couple of milliseconds delay here or there is not going to be noticeable to users. The timer interrupts are more appropriate for things like maintaining a sound wave of the proper frequency. A slight delay here lowers the quality of the user experience significantly.

We could, of course, do even more complex time management to handle both the graphics and the sound in one loop, but that would be painful. It's much nicer to have a scheduling system that can alternate between multiple threads of execution. This is what I will describe in this post.

## Thread Control Block and the Stack

Since there's usually only one CPU, the threads need to share it. The easiest way to achieve time sharing is to have a fixed time slice at the end of which the system will switch to another thread. The `systick` interrupt is perfect for this purpose. Not only is it invoked periodically, but it can also be requested manually by manipulating a register. This property will be useful in the implementation of sleeping and blocking.

But first things first: we need a structure that will describe a thread, a.k.a. a Thread Control Block:

```
struct IO_sys_thread {
  uint32_t             *stack_ptr;
  uint32_t              flags;
  void (*func)();
  IO_sys_thread        *next;
  uint32_t              sleep;
  IO_sys_semaphore     *blocker;
  uint8_t               priority;
};
```
• `stack_ptr` - points to the top of the thread's stack
• `flags` - properties describing the thread; we will need just one to indicate whether the thread used the floating-point coprocessor
• `func` - thread's entry point
• `next` - pointer to the next thread in the queue (used for scheduling)
• `sleep` - number of milliseconds the thread still needs to sleep
• `blocker` - a pointer to a semaphore blocking the thread (if any)
• `priority` - thread's priority

When invoking an interrupt handler, the CPU saves most of the running state of the current thread to the stack. Therefore, the task of the interrupt handler boils down to switching the stack pointers. The CPU will then pop all the registers back from the new stack. This behavior means that we need to do some initialization first:

```
void IO_sys_stack_init(IO_sys_thread *thread, void (*func)(void *), void *arg,
  void *stack, uint32_t stack_size)
{
  uint32_t sp1 = (uint32_t)stack;
  uint32_t sp2 = (uint32_t)stack;
  sp2 += stack_size;
  sp2 = (sp2 >> 3) << 3;          // the stack base needs to be 8-aligned
  if(sp1 % 4)
    sp1 = ((sp1 >> 2) << 2) + 4;  // make the end of the stack 4-aligned
  stack_size = (sp2 - sp1) / 4;   // new size in words

  uint32_t *sp = (uint32_t *)sp1;
  sp[stack_size-1] = 0x01000000;          // PSR with thumb bit
  sp[stack_size-2] = (uint32_t)func;      // program counter
  sp[stack_size-3] = 0xffffffff;          // link register
  sp[stack_size-8] = (uint32_t)arg;       // r0 - the argument
  thread->stack_ptr = &sp[stack_size-16]; // top of the stack
}
```

The ARM ABI requires that the top of the stack is 8-aligned and we will typically push and pop 4-byte words. The first part of the setup function makes sure that the stack boundaries are right. The second part sets the initial values of the registers. Have a look here for details.

• the `PSR` register needs to have the Thumb bit switched on
• we put the startup function address to the program counter
• we put `0xffffffff` to the link register to avoid confusing stack traces in GDB
• `r0` gets the argument to the startup function
• an interrupt pushes 16 words worth of registers to the stack, so the initial value of the stack pointer needs to reflect that

This function is typically called as:

```
IO_sys_stack_init(thread, thread_wrapper, thread, stack, stack_size);
```

Note that we do not call the user thread function directly. Rather, we have a wrapper function that gets the TCB as its argument. This is because we need to remove the thread from the scheduling queue if the user-specified function returns.

## The context switcher

Let's now have a look at the code that does the actual context switching. Since it needs to operate directly on the stack, it needs to be written in assembly. It is not very complicated, though. What it does is:

• pushing some registers to the stack
• storing the current stack pointer in the `stack_ptr` variable of the current TCB
• calling the scheduler to select the next thread
• popping some registers from the new stack
```
#define OFF_STACK_PTR 0
#define OFF_FLAGS     4
#define FLAG_FPU      0x01

  .thumb
  .syntax unified

  .global IO_sys_current
  .global IO_sys_schedule

  .text

  .global systick_handler
  .type systick_handler STT_FUNC
  .thumb_func
  .align  2
systick_handler:
  cpsid i                     ; disable interrupts
  push  {r4-r11}              ; push r4-11
  ldr   r0, =IO_sys_current   ; load the address of IO_sys_current to r0
  ldr   r1, [r0]              ; r1 = IO_sys_current

  ubfx  r2, lr, #4, #1        ; extract the fourth bit from the lr register
  cbnz  r2, .Lsave_stack      ; no FPU context to save
  vstmdb sp!, {s16-s31}       ; push FPU registers, this triggers pushing of
                              ; s0-s15
  ldr   r2, [r1, #OFF_FLAGS]  ; load the flags
  orr   r2, r2, #FLAG_FPU     ; set the FPU context flag
  str   r2, [r1, #OFF_FLAGS]  ; store the flags

.Lsave_stack:
  str   sp, [r1, #OFF_STACK_PTR] ; store the stack pointer at *IO_sys_current

  push  {r0, lr}              ; calling c code, so store r0 and the link
                              ; register
  bl    IO_sys_schedule       ; call the scheduler
  pop   {r0, lr}              ; restore r0 and lr

  ldr   r1, [r0]              ; load the new TCB pointer to r1
  ldr   sp, [r1, #OFF_STACK_PTR] ; get the stack pointer of the new thread

  orr   lr, lr, #0x10         ; clear the floating point flag in EXC_RETURN
  ldr   r2, [r1, #OFF_FLAGS]  ; load the flags
  tst   r2, #0x01             ; see if we have the FPU context
  beq   .Lrestore_regs        ; no FPU context
  vldmia sp!, {s16-s31}       ; pop the FPU registers
  bic   lr, lr, #0x10         ; set the floating point flag in EXC_RETURN

.Lrestore_regs:
  pop   {r4-r11}              ; restore regs r4-11
  cpsie i                     ; enable interrupts
  bx    lr                    ; exit the interrupt, restore r0-r3, r12, lr, pc,
                              ; psr
```

The only complication here is that we sometimes need to store the floating-point registers in addition to the regular ones. This is only necessary if the thread used the FPU. The fourth bit of `EXC_RETURN`, the value found in the `LR` register on exception entry, indicates the status of the FPU: if it is `0`, the stacked frame is an extended one, and we need to save the high floating-point registers to the stack and set the FPU flag in the TCB.

Also, after selecting the new thread, we check if its stack contains the FPU registers by checking the FPU flag in its TCB. If it does, we pop these registers and change `EXC_RETURN` accordingly.

The Lazy Stacking is taken care of by simply pushing and popping the high registers - it counts as an FPU operation.

## Semaphores, sleeping and idling

We can now run threads and switch between them, but it would be useful to be able to put threads to sleep and make them wait for events.

Sleeping is easy. We just need to set the `sleep` field in the TCB of the current thread and make the scheduler ignore threads whenever their `sleep` field is not zero:

```
void IO_sys_sleep(uint32_t time)
{
  IO_sys_current->sleep = time;
  IO_sys_yield();
}
```

The ISR that handles the system time can loop over all threads and decrement this counter every millisecond.

Waiting for a semaphore works in a similar way. We mark the current thread as blocked:

```
void IO_sys_wait(IO_sys_semaphore *sem)
{
  IO_disable_interrupts();
  --*sem;
  if(*sem < 0) {
    IO_sys_current->blocker = sem;
    IO_sys_yield();
  }
  IO_enable_interrupts();
}
```

The purpose of `IO_sys_yield` is to indicate that the current thread does not need to run anymore and force a context switch. The function resets the `systick` counter and forces the interrupt:

```
void IO_sys_yield()
{
  STCURRENT_REG = 0;          // clear the systick counter
  INTCTRL_REG   = 0x04000000; // trigger systick
}
```

Waking a thread waiting for a semaphore is somewhat more complex:

```
void IO_sys_signal(IO_sys_semaphore *sem)
{
  IO_disable_interrupts();
  ++*sem;
  if(*sem <= 0 && threads) {
    IO_sys_thread *t;
    for(t = threads; t->blocker != sem; t = t->next);
    t->blocker = 0;
  }
  IO_enable_interrupts();
}
```

If the value of the semaphore was negative, we find a thread that it was blocking and unblock it. It will make the scheduler consider this thread for running in the future.

It may happen that none of the user-defined threads is runnable at the time the scheduler makes its decision: all of them may be either sleeping or waiting for a semaphore. In that case, we need to keep the CPU occupied with something, i.e., we need a fake thread:

```
static void iddle_thread_func(void *arg)
{
  (void)arg;
  while(1) IO_wait_for_interrupt();
}
```

## Scheduler

The system maintains a circular linked list of TCBs called `threads`. The job of the scheduler is to loop over this list and select the next thread to run. It places its selection in a global variable called `IO_sys_current` so that other functions may access it.

```
void IO_sys_schedule()
{
  if(!threads) {
    IO_sys_current = &iddle_thread;
    return;
  }

  IO_sys_thread *stop = IO_sys_current->next;

  if(IO_sys_current == &iddle_thread)
    stop = threads;

  IO_sys_thread *cur  = stop;
  IO_sys_thread *sel  = 0;
  int            prio = 256;

  do {
    if(!cur->sleep && !cur->blocker && cur->priority < prio) {
      sel = cur;
      prio = sel->priority;
    }
    cur = cur->next;
  }
  while(cur != stop);

  if(!sel)
    sel = &iddle_thread;

  IO_sys_current = sel;
}
```

This scheduler is simple:

• whenever there is nothing to run, select the idle thread
• otherwise select the next highest priority thread that is neither sleeping nor blocked on a semaphore

## Starting up the beast

So how do we get this whole business running? We need to invoke the scheduler that will preempt the current thread and select the next one to run. The problem is that we're running using the stack provided by the bootstrap code and don't have a TCB. Nothing prevents us from creating a dummy one, though. We can create it on the current stack (it's useful only once) and point it to the beginning of our real queue of TCBs:

```
IO_sys_thread dummy;
dummy.next = threads;
IO_sys_current = &dummy;
```

We then set the `systick` up:

```
STCTRL_REG     = 0;            // turn off
STCURRENT_REG  = 0;            // reset
SYSPRI3_REG   |= 0xE0000000;   // priority 7
```

And force its interrupt:

```
IO_sys_yield();
IO_enable_interrupts();
```

## Tests

Tests 11 and 12 run a dummy calculation for some time and then return. After this happens, the system can only run the idle thread. If we plug in the profiler code, we can observe the timings on a logic analyzer:

Test #11

Test 13 is more complicated than the two previous ones. Three threads are running in a loop, sleeping, and signaling semaphores. Two more threads are waiting for these semaphores, changing some local variables and signaling other semaphores. Finally, there is the writer thread that blocks on the last set of semaphores and displays the current state of the environment. The output from the logic analyzer shows that the writer thread needs around 3.3 time-slices to refresh the screen:

Test #13

How does all this make Silly Invaders better? The main advantage is that we don't need to calculate complex timings for multiple functions of the program. We create two threads, one for rendering the scene and another for playing the music tune. Each thread handles its own timing, and everything else takes care of itself with good enough time guarantees.

```
IO_sys_thread game_thread;
void game_thread_func(void *arg)
{
  while(1) {
    SI_scene_render(&scenes[current_scene].scene, &display);
    IO_sys_sleep(1000/scenes[current_scene].scene.fps);
  }
}

IO_sys_thread sound_thread;
void sound_thread_func(void *arg)
{
  IO_sound_player_run(&sound_player);
}
```

The threads are registered with the system in the `main` function:

```
IO_sys_thread_add(&game_thread,  game_thread_func,  2000, 255);
IO_sys_thread_add(&sound_thread, sound_thread_func, 2000, 255);
IO_sys_run(1000);
```

For the complete code see:

## References

There is a great course on EdX called Realtime Bluetooth Networks explaining the topic in more detail. I highly recommend it.

## Intro

Life would be so much easier if all the software was open source and came packaged with Debian. Much of the stuff I use is available this way, but there are still some programs that come as binary blobs with custom installers. I don't like that. I don't trust that. Every now and then, you hear the stories of software coming from reputable companies and misbehaving in dramatic ways. It would be great to contain the potential damage, but running virtual machines on a laptop is a pain. As it turns out, things may work pretty well with Docker, but as usual, the configuration is not so trivial.

Terminal

## The X Server

The solutions I found on the Internet either share the main X server with the container or use VNC. The first approach is problematic because apparently the X architecture has been designed by happy hippies and has no notion of security. If two applications share a screen, for instance, one can sniff the keystrokes typed into the other, and all this is by design. The VNC solution, on the other hand, is terribly slow: windows smudge when moved, and the Netflix playback is lagging.

Starting a Xephyr instance on the host and sharing its socket with the container seems to solve the sniffing problem. The programs running inside the container can't listen to the keystrokes typed outside of it anymore. Xephyr is also fast enough to handle high-resolution movie playback smoothly.

You can start Xephyr like this:

``````Xephyr :1 -ac -br -screen 1680x1050 -resizeable
``````

The server will run as display `:1` in a resizable window of initial size defined by the `screen` parameter. Adding the following to the Docker command line makes the server visible inside of the container:

``````-e DISPLAY=:1 -v /tmp/.X11-unix/X1:/tmp/.X11-unix/X1
``````

The only remaining pain point is the fact that you cannot share the clipboard by default. Things copied outside of the container do not paste inside and vice versa. The `xsel` utility and a couple of lines of bash code can solve this problem easily:

```
CLIP1=""
CLIP2=`xsel -o -b --display :1`
while true; do
  CLIP1NEW=`xsel -o -b --display :0`
  CLIP2NEW=`xsel -o -b --display :1`
  if [ "x$CLIP1" != "x$CLIP1NEW" ]; then
    xsel -o -b --display :0 | xsel -i -b --display :1
    CLIP1=$CLIP1NEW
  fi
  if [ "x$CLIP2" != "x$CLIP2NEW" ]; then
    xsel -o -b --display :1 | xsel -i -b --display :0
    CLIP2=$CLIP2NEW
  fi
  sleep 1
done
```

## Audio

Making the audio work both ways (the sound and the microphone) is surprisingly easy with PulseAudio. The host just needs to configure the native protocol plug-in and ensure that port `4713` is not blocked by the firewall:

``````pactl load-module module-native-protocol-tcp auth-ip-acl=172.17.0.2
``````

All you need to do in the container is make sure that the `PULSE_SERVER` envvar points to the host. This is less straightforward than you might expect when you run a desktop environment and don't want to start all your programs from a terminal window. For XFCE, I do the following in the script driving the container:

```
mkdir -p /home/prisoner/.config/xfce4
chown prisoner:prisoner -R /home/prisoner
XINITRC=/home/prisoner/.config/xfce4/xinitrc
rm -f $XINITRC

echo -e "#!/bin/sh\n\n" >> $XINITRC
echo -e "export PULSE_SERVER=$PULSE_SERVER\n\n" >> $XINITRC
tail -n +2 /etc/xdg/xfce4/xinitrc >> $XINITRC

sudo -u prisoner startxfce4
```

This code prepends the appropriate `export` statement to XFCE's `xinitrc` file and assumes that the `PULSE_SERVER` variable is known inside the container:

``````-e PULSE_SERVER=172.17.42.1
``````

I also disable the local PulseAudio server in XFCE. It has strange interactions with the one running on the host.

## Jail

Doing all this by hand every time you want to run the container is way too painful. I wrote a script to automate the process. It can:

• set up the PulseAudio server
• forward static devices (e.g., `/dev/video0`)
• forward USB devices if they are present (e.g., `21a9:1006` as `/dev/bus/usb/001/016`)
• run the Xephyr X server instance for the container
• forward the clipboards
• set up docker's command line to make everything work together

This is what it all looks like:

``````]==> jail.sh
[i] Running jail: 20161121-213528
[i] Container: jail:v01
[i] Hostname: jail
[i] Home: /home/ljanyst/Contained/jail/home
[i] Setting up the local PulseAudio server (172.17.42.1)... OK
[i] Attaching device /dev/video0
[i] USB device 21a9:1006 not present
[i] Running Xephyr display at :1 (1680x1050)... OK
[i] Running docker... OK
[i] Removing container a545365592c1... OK
[i] Killing clipboard forwarder, PID: 2776... DONE
[i] Killing Xephyr, PID: 2767... DONE
[i] All done. Bye!
``````

Skype

Netflix

Saleae Logic Analyzer

## Intro

It took some playing around, but I have finally managed to figure out how to build from source all the tools necessary to put Zephyr on Arduino 101. You may say that the effort is pointless because you could just use whatever is provided by the SDK. For me, however, the deal is more about what I can learn from the experience than about the result itself. There is enough open source code around to make things work reasonably well, but putting it all together is a bit of a challenge, so what follows is a short HOWTO.

Arduino 101 setup

## Toolchain

Arduino 101 has a Quark core and an ARC EM core. The appropriate targets seem to be `i586-none-elfiamcu` for the former and `arc-none-elf` for the latter. Since there is no pre-packaged toolchain for either of these in Debian, you'll need to build your own. You can use the vanilla binutils (version 2.27 worked for me) and the vanilla newlib (version 2.4.0.20160527 did not require any patches). GCC is somewhat more problematic: since apparently not all the necessary ARC patches have been accepted into the mainline yet, you'll need to download it from the Synopsys GitHub repo. GDB requires tweaking for both cores.

binutils:

``````]==> mkdir binutils && cd binutils
]==> wget https://ftp.gnu.org/gnu/binutils/binutils-2.27.tar.bz2
]==> tar jxf binutils-2.27.tar.bz2
]==> mkdir i586-none-elfiamcu && cd i586-none-elfiamcu
]==> ../binutils-2.27/configure --prefix=/home/ljanyst/Apps/cross-compilers/i586-none-elfiamcu --target=i586-none-elfiamcu
]==> make -j12 && make install
]==> cd .. && mkdir arc-none-elf && cd arc-none-elf
]==> ../binutils-2.27/configure --prefix=/home/ljanyst/Apps/cross-compilers/arc-none-elf --target=arc-none-elf
]==> make -j12 && make install
]==> cd ../..
``````

gcc:

``````]==> mkdir gcc && cd gcc
]==> wget ftp://ftp.uvsq.fr/pub/gcc/releases/gcc-6.2.0/gcc-6.2.0.tar.bz2
]==> tar jxf gcc-6.2.0.tar.bz2
]==> git clone git@github.com:foss-for-synopsys-dwc-arc-processors/gcc.git
]==> cd gcc && git checkout arc-4.8-dev && cd ..
]==> mkdir i586-none-elfiamcu && cd i586-none-elfiamcu
]==> ../gcc-6.2.0/configure --prefix=/home/ljanyst/Apps/cross-compilers/i586-none-elfiamcu --target=i586-none-elfiamcu --enable-languages=c --with-newlib
]==> make -j12 all-gcc && make install-gcc
]==> cd .. && mkdir arc-none-elf && cd arc-none-elf
]==> ../gcc/configure --prefix=/home/ljanyst/Apps/cross-compilers/arc-none-elf --target=arc-none-elf  --enable-languages=c --with-newlib --with-cpu=arc700
]==> make -j12 all-gcc && make install-gcc
]==> cd ../..
``````

newlib:

``````]==> mkdir newlib && cd newlib
]==> wget ftp://sourceware.org/pub/newlib/newlib-2.4.0.20160527.tar.gz
]==> tar zxf newlib-2.4.0.20160527.tar.gz
]==> mkdir i586-none-elfiamcu && cd i586-none-elfiamcu
]==> ../newlib-2.4.0.20160527/configure --prefix=/home/ljanyst/Apps/cross-compilers/i586-none-elfiamcu --target=i586-none-elfiamcu
]==> make -j12 && make install
]==> cd .. && mkdir arc-none-elf && cd arc-none-elf
]==> ../newlib-2.4.0.20160527/configure --prefix=/home/ljanyst/Apps/cross-compilers/arc-none-elf --target=arc-none-elf
]==> make -j12 && make install
]==> cd ../..
``````

libgcc:

``````]==> cd gcc/i586-none-elfiamcu
]==> make -j12 all-target-libgcc && make install-target-libgcc
]==> cd ../arc-none-elf
]==> make -j12 all-target-libgcc && make install-target-libgcc
]==> cd ../..
``````

GDB does not work for either platform out of the box. For Quark it compiles the i386 version but does not recognize the iamcu architecture even though, according to Wikipedia, it's essentially the same as i586, and libbfd knows about it. After some poking around the code, it seems that initializing the i386 platform with the iamcu bfd architecture definition does the trick:

```
diff -Naur gdb-7.11.1.orig/gdb/i386-tdep.c gdb-7.11.1/gdb/i386-tdep.c
--- gdb-7.11.1.orig/gdb/i386-tdep.c     2016-06-01 02:36:15.000000000 +0200
+++ gdb-7.11.1/gdb/i386-tdep.c  2016-09-24 15:39:11.000000000 +0200
@@ -8890,6 +8890,7 @@
 _initialize_i386_tdep (void)
 {
   register_gdbarch_init (bfd_arch_i386, i386_gdbarch_init);
+  register_gdbarch_init (bfd_arch_iamcu, i386_gdbarch_init);

   /* Add the variable that controls the disassembly flavor.  */
```

For ARC the Synopsys open source repo provides a solution.

``````]==> mkdir gdb && cd gdb
]==> wget ftp://sourceware.org/pub/gdb/releases/gdb-7.11.1.tar.xz
]==> tar xf gdb-7.11.1.tar.xz
]==> cd gdb-7.11.1 && patch -Np1 -i ../iamcu-tdep.patch && cd ..
]==> git clone git@github.com:foss-for-synopsys-dwc-arc-processors/binutils-gdb.git
]==> cd binutils-gdb && git checkout arc-2016.09-gdb && cd ..
]==> mkdir i586-none-elfiamcu && cd i586-none-elfiamcu
]==> ../gdb-7.11.1/configure --prefix=/home/ljanyst/Apps/cross-compilers/i586-none-elfiamcu --target=i586-none-elfiamcu
]==> make -j12 && make install
]==> cd .. && mkdir arc-none-elf && cd arc-none-elf
]==> ../binutils-gdb/configure --prefix=/home/ljanyst/Apps/cross-compilers/arc-none-elf --target=arc-none-elf
]==> make -j12 all-gdb && make install-gdb
]==> cd ../..
``````

## OpenOCD

There was no OpenOCD release for quite some time, and it does not seem to have any support for Quark SE. The situation is not much better if you look at the head of the master branch of their repo. Fortunately, both Intel and Synopsys provide some support for their parts of the platform and making it work with mainline openocd does not seem to be hard.

``````]==> git clone git@github.com:ljanyst/openocd.git && cd openocd
]==> git checkout lj
]==> ./bootstrap
]==> ./configure --prefix=/home/ljanyst/Apps/openocd
]==> make -j12 && make install
``````

Zephyr uses the following configuration for the Arduino (referred to as `openocd.cfg` below):

```
source [find interface/ftdi/flyswatter2.cfg]
source [find board/quark_se.cfg]

quark_se.quark configure -event gdb-attach {
        reset halt
        gdb_breakpoint_override hard
}

quark_se.quark configure -event gdb-detach {
        resume
        shutdown
}
```

You can use the following commands to run the GDB server, flash for Quark and flash for ARC respectively (this is what Zephyr does):

``````]==> openocd -s /home/ljanyst/Apps/openocd/share/openocd/scripts/ -f openocd.cfg  -c 'init' -c 'targets' -c 'reset halt'
]==> openocd -s /home/ljanyst/Apps/openocd/share/openocd/scripts/ -f openocd.cfg  -c 'init' -c 'targets' -c 'targets quark_se.arc-em' -c 'reset halt' -c 'load_image zephyr.bin 0x40010000' -c 'reset halt' -c 'verify_image zephyr.bin 0x40010000' -c 'reset run' -c 'shutdown'
]==> openocd -s /home/ljanyst/Apps/openocd/share/openocd/scripts/ -f openocd.cfg  -c 'init' -c 'targets' -c 'targets quark_se.arc-em' -c 'reset halt' -c 'load_image zephyr.bin 0x40034000' -c 'reset halt' -c 'verify_image zephyr.bin 0x40034000' -c 'reset run' -c 'shutdown'
``````

## Hello world!

You need to compile and flash Zephyr's Hello World sample. The two commands below do the trick for the compilation part:

``````make BOARD=arduino_101_factory CROSS_COMPILE=i586-none-elfiamcu- CFLAGS="-march=lakemont -mtune=lakemont -msoft-float -miamcu -O0"
make BOARD=arduino_101_sss_factory CROSS_COMPILE=arc-none-elf-
``````

After flashing, you should see the following on your UART console:

``````]==> screen /dev/ttyUSB0 115200,cs8
ipm_console0: 'Hello World! arc'
Hello World! x86
``````

## Debugging

If you follow the instructions from the Zephyr wiki, debugging for the Intel part works fine. I still have some trouble making breakpoints work for ARC and will try to write an update if I have time to figure it out.

``````]==> i586-none-elfiamcu-gdb outdir/zephyr.elf
...
(gdb) target remote :3333
Remote debugging using :3333
0x0000fff0 in ?? ()
(gdb) b main
Breakpoint 1 at 0x400100ed: file /home/ljanyst/Projects/zephyr/zephyr-project/samples/hello_world/nanokernel/src/main.c, line 37.
(gdb) c
Continuing.
target running
hit hardware breakpoint (hwreg=0) at 0x400100ed

Breakpoint 1, main () at /home/ljanyst/Projects/zephyr/zephyr-project/samples/hello_world/nanokernel/src/main.c:37
37              PRINT("Hello World! %s\n", CONFIG_ARCH);
(gdb) s
step done from EIP 0x400100ed to 0x400100f2
step done from EIP 0x400100f2 to 0x400100f7
step done from EIP 0x400100f7 to 0x40013129
target running
hit hardware breakpoint (hwreg=1) at 0x4001312f
printk (fmt=0x40013e04 "Hello World! %s\n") at /home/ljanyst/Projects/zephyr/zephyr-project/misc/printk.c:164
164             va_start(ap, fmt);
(gdb) s
step done from EIP 0x4001312f to 0x40013132
step done from EIP 0x40013132 to 0x40013135
165             _vprintk(fmt, ap);
(gdb)
``````