Bike algorithm

I like riding a bike. Because my knees are not healthy right now, I cannot ride much, but even a little riding makes me feel good. Today it occurred to me that I might be addicted to riding a bike. While riding, I was thinking about the algorithm that commands me. Silly me thought it was funny.

So I tried to make a flowchart of the algorithm. I tried some online services to draw it, but soon I thought, wow, coding it in Python could be faster than this. I made up my mind to use the NetworkX library. It did not take long to realize that was not a good idea: their own tutorial for drawing flows is really bad.


My final solution is Graphviz. You can install it with Homebrew:

$ brew install graphviz

Studying Graphviz was easy and fun, more fun than using a ready-made flowchart interface, and easier, at least for me. Copy the simple code below and change the settings to your own taste. The parameters are documented on the Graphviz website.

digraph {

  node [shape=box];  // default node attributes; the shape here is my guess, the original left this bracket unclosed

  Step1 [   fontcolor=navy,
            label="Wake up"];

  Step1a [  label="Want to ride a bike"];

  Step1b [  label="Raining?"];

  Step2 [   label="Do you ride a bike now?"];

  Step3 [   color=green,
            shape=pentagon,
            label="Ride a bike"];

  Step4 [   color=crimson,
            shape=egg,
            label="Feel bad"];

  Step5 [   label="Time to sleep?"];


  Step1  -> Step1a;
  Step1a -> Step1b;
  Step1b -> Step2 [label=No];
  Step1b -> Step4 [label=Yes];
  Step2  -> Step3 [label="Yes, +1"];
  Step2  -> Step4 [label=No];
  Step3  -> Step5;
  Step4  -> Step5;
  Step5  -> Sleep [label=Yes];
  Step5  -> Step1a [label=No];
  Sleep  -> Step1;
}

Save this code to a file1 (say, bike.dot), and then run:

$ dot -Tpng -O bike.dot

You will get a PNG file named after the source (the -O flag derives the output filename from the input).
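Since I originally thought coding this in Python would be faster, here is a minimal sketch of generating the same DOT source from plain Python, with no third-party library. The node and edge tables below are just a restatement of the flowchart above, and the filename bike.dot is my own choice.

```python
# Build the DOT source for the bike flowchart from plain data structures.
nodes = {
    "Step1":  'fontcolor=navy, label="Wake up"',
    "Step1a": 'label="Want to ride a bike"',
    "Step1b": 'label="Raining?"',
    "Step2":  'label="Do you ride a bike now?"',
    "Step3":  'color=green, shape=pentagon, label="Ride a bike"',
    "Step4":  'color=crimson, shape=egg, label="Feel bad"',
    "Step5":  'label="Time to sleep?"',
}
edges = [
    ("Step1", "Step1a", None),
    ("Step1a", "Step1b", None),
    ("Step1b", "Step2", "No"),
    ("Step1b", "Step4", "Yes"),
    ("Step2", "Step3", "Yes, +1"),
    ("Step2", "Step4", "No"),
    ("Step3", "Step5", None),
    ("Step4", "Step5", None),
    ("Step5", "Sleep", "Yes"),
    ("Step5", "Step1a", "No"),
    ("Sleep", "Step1", None),
]

lines = ["digraph {"]
for name, attrs in nodes.items():
    lines.append(f"  {name} [{attrs}];")
for src, dst, label in edges:
    attr = f' [label="{label}"]' if label else ""
    lines.append(f"  {src} -> {dst}{attr};")
lines.append("}")
dot_source = "\n".join(lines)

# Write it out, ready for: dot -Tpng -O bike.dot
with open("bike.dot", "w") as f:
    f.write(dot_source)
```

This only assembles a text file; you still need the dot command to render it.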


The difference from a normal recurrent graph is the reward, +1. If the goal is to accumulate 50 reward points, the flow would probably terminate: given the weather statistics in Vancouver and my job-free situation, I should get more than 50 rain-free days. Still, it depends on the weather, and it would take a long time in the rainy season. There is no learning here; it is just a silly version of a recursive algorithm, but it hints at why value-based reinforcement learning needed to evolve into deep reinforcement learning.
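The convergence argument above can be sketched as a toy simulation (this is not real reinforcement learning, just the loop from the flowchart): each rain-free day yields +1 point, and we count the days until 50 points. The rain probability is a made-up parameter, not actual Vancouver weather statistics.

```python
import random

def days_to_goal(rain_prob, goal=50, seed=0):
    """Count days until `goal` reward points, riding only on rain-free days."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    points, days = 0, 0
    while points < goal:
        days += 1
        if rng.random() >= rain_prob:  # no rain -> ride a bike -> +1
            points += 1
    return days

print(days_to_goal(0.0))  # never rains: exactly 50 days
print(days_to_goal(0.5))  # rains half the time: around 100 days on average
```

With rain probability p, the expected number of days is goal / (1 - p), so the flow converges as long as it is not raining forever.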

  1. I skip the explanation of PATH. [return]