Sunday, February 23, 2020

Framework for Designing Programmable Modules

First, I just want to say that I really like Chisel. As a software developer who has only occasionally dabbled in creating hardware so far, I've found it much more enjoyable to learn and use than Verilog. Though my knowledge of both languages is still limited, Chisel's focus on hardware generation rather than description satisfies my desire to parameterize and automate everything in sight. The learning process for Chisel was much improved by the extensive documentation, including the Bootcamp and API docs. The abundance of open-source Chisel projects also provides valuable examples of how to use certain features. The basic building blocks provided by the language and utilities make it easy to get started creating hardware from scratch. The testing facilities that come standard also enable verification at an early stage, which appeals to me as a Test-Driven Development fanatic. Finally, Scala is a powerful language with many conveniences for writing clear and concise code.
   All of that being said, there are difficulties with hardware design that even Chisel does not fully address. The simple circuits taught in tutorials and bootcamps are fine for getting off the ground, but there is a big gap between those and what's required to create a processor or any block that's part of a larger system. Interfaces between memories, caches, and other blocks require synchronization or diplomacy, which involves keeping track of valid and/or ready signals in addition to internal state. Improving performance with techniques such as pipelining or parallelization increases the number of elements that have to work together at all times. The design then becomes significantly more complex, especially for software developers like me who write mostly procedural programs. The concurrency of hardware is a hard thing to wrap your head around. This is a problem because creating a processor is a logical first goal for a new designer. The abundance of open specifications, toolchains, and compatible software makes processors a rewarding project. We don't just want to design hardware; we also want to use it to run programs written by ourselves and others. So it feels like there's an owl-drawing problem, where the circles are the existing languages and documentation, and the owl is the processor.



   So we have to add in more of these steps and take hardware generators a step further. Queues and shift registers are useful, but Chisel libraries should continue beyond those to offer frameworks that conform more easily to the user's ultimate requirements. dsptools is one step in the right direction, with its ready-made traits for adding interfaces for a variety of bus protocols. Just as it has been for buses, the transition from processor specification to implementation must be made easier. Thankfully, instruction set specifications all have somewhat similar contents and format: the user-visible state, and the instruction encodings and their effects on that state and on IO. It should be possible to harness the power of Chisel to generate a processor given a specification pattern that resembles these documents.
   That's exactly what I'm attempting to do with the ProcessingModule framework. In it, a set of instructions and common, user-visible logic elements are defined. Instructions declare their dependencies on state and/or external resources. An instruction also specifies what action will be performed with those resources once they are available. Both sequential and combinational elements can be shared among instructions. Sequential elements can be general-purpose register files or control/status registers. Large combinational elements like ALUs can also be shared to improve the resource usage of the processor. The implementation requires parameters for data, instruction, and address widths, but there will eventually be more options to insert structural features like pipelining, speculation, and/or instruction re-ordering that improve processor performance at the cost of additional hardware resources. The framework inherits from Module and features standard Decoupled and Valid interfaces, so it mixes in well with other Chisel code.

import chisel3._

abstract class ProcessingModule(dWidth : Int, dAddrWidth : Int, iWidth : Int, queueDepth : Int) extends Module {

  val io = IO(new Bundle {
    val instr = new Bundle {
      val in = Flipped(util.Decoupled(UInt(iWidth.W)))
      val pc = util.Valid(UInt(64.W))
    }
    val data = new Bundle {
      val in = Flipped(util.Decoupled(UInt(dWidth.W)))
      val out = new Bundle {
        val addr = util.Valid(UInt(dAddrWidth.W))
        val value = util.Decoupled(UInt(dWidth.W))
      }
    }
  });

  def initInstrs : Instructions
  …
}

This is the beginning of the ProcessingModule class. First, there are constructor parameters for the widths of data loaded from and stored to memory, of data memory addresses, and of instructions. There is also a parameter to specify the depth of the queue that receives incoming instructions. Following that is a basic IO bundle that's divided into instruction and data sub-bundles. The instruction part has a decoupled port for incoming instructions and an output port for the program counter. The program counter is currently fixed to 64 bits wide, but that will be made parameterizable in the future. The data bundle has a Decoupled input port and output address and value ports. Following the IO is the one abstract method, initInstrs, which will be called only once later in the constructor. Modules that inherit from ProcessingModule must implement this method to return the logic and instruction set they want to use. The return type is another abstract class called Instructions.

abstract class Instructions {

  def logic : Seq[InstructionLogic]
}

abstract class InstructionLogic (val name : String, val dataInDepend : Boolean, val dataOutDepend : Boolean) {

  def decode( instr : UInt) : Bool

  def load(instr : UInt) : UInt = 0.U

  def execute( instr : UInt) : Unit

  def store(instr : UInt) : UInt = 0.U
}

Subclasses of Instructions should declare logic shared between instructions in their constructor. The logic method also needs to be defined to return a sequence of InstructionLogic. Each instance of the InstructionLogic class represents one instruction. There are parameters for the instruction name and for whether or not it depends on memory. The name field currently exists only to distinguish instructions in the Chisel code, but it will eventually prove useful for automatically generating debugging utilities for simulation. The dataInDepend and dataOutDepend parameters should be set to true if the instruction will read from or write to memory, respectively. The methods within the InstructionLogic class roughly correspond to the stages of a traditional pipelined processor architecture. decode and execute are required to be implemented by all instructions. decode takes a value given to the processor via the instruction bus and outputs a high value if it's a match for this particular instruction type. The rest of the stages will then be run for that InstructionLogic instance. execute takes the same instruction value and performs some operation on the processor state; it does not return any value. If a memory dependency exists for the instruction, then the load and/or store methods will also be called, so they should also be implemented by the designer. Both of these methods should return the address in memory that should be accessed. For instructions that read from memory, the value in memory at that address will be retrieved and stored in the dataIn register. For instructions that store to memory, the value in the dataOut register will be stored at the given address.
   Following is a simple example of a processor that just adds numbers to a pair of registers. Some common routines for extracting subfields from an instruction are defined at the top. In the initInstrs method, an Instructions instance is created with a 2-element register array that's accessible to all instructions. Then the logic method begins by defining a nop instruction, which contains only the decode logic that indicates whether the current instruction is a nop or not. There is no logic defined in the execute method, because the instruction does not do anything.

class AdderModule(dWidth : Int) extends ProcessingModule(dWidth, AdderInstruction.addrWidth, AdderInstruction.width, 3) {

  def getInstrCode(instr : UInt) : UInt = instr(2,0)
  def getInstrReg(instr : UInt) : UInt = instr(3)
  def getInstrAddr(instr : UInt) : UInt = instr(7,4)

  def initInstrs = new Instructions {
    val regs = RegInit(VecInit(Seq.fill(2){ 0.U(dWidth.W) }))
    def logic = {
      new InstructionLogic("nop", dataInDepend=false, dataOutDepend=false) {
        def decode ( instr : UInt ) : Bool = getInstrCode(instr) === AdderInstruction.codeNOP
        def execute ( instr : UInt ) : Unit = {}
      } ::
      new InstructionLogic("incrData", dataInDepend=true, dataOutDepend=false) {
        …
      }
      …
    }
  }
}

The next instruction, incr1, increments the specified register by 1. The register to increment is determined from a subfield in the instruction, which is extracted in the execute stage with the getInstrReg method defined above.

new InstructionLogic("incr1", dataInDepend=false, dataOutDepend=false) {

  def decode ( instr : UInt ) : Bool = {
    getInstrCode(instr) === AdderInstruction.codeIncr1
  }

  def execute ( instr : UInt ) : Unit = {
    regs(getInstrReg(instr)) := regs(getInstrReg(instr)) + 1.U
  }
}

The incrData instruction increments a register by a number stored in memory. The dataInDepend parameter for this instruction is set to true since it needs to read from memory. The load method is overridden here to provide the address to read from, which also comes from a subfield of the instruction. The value from memory is then automatically stored in the built-in dataIn register, which is used in the execute method.

new InstructionLogic("incrData", dataInDepend=true, dataOutDepend=false) {

  def decode ( instr : UInt ) : Bool = {
    getInstrCode(instr) === AdderInstruction.codeIncrData
  }

  override def load ( instr : UInt ) : UInt = getInstrAddr(instr)

  def execute ( instr : UInt ) : Unit = {
    regs(getInstrReg(instr)) := regs(getInstrReg(instr)) + dataIn
  }
}

The store instruction stores a register value to memory, and thus has its dataOutDepend parameter set to true. The dataOut register is written in the execute method. The value in the dataOut register will be stored at the address returned by the store method.

new InstructionLogic("store", dataInDepend=false, dataOutDepend=true) {

  def decode ( instr : UInt ) : Bool = {
    getInstrCode(instr) === AdderInstruction.codeStore
  }

  def execute ( instr : UInt ) : Unit = {
    dataOut := regs(getInstrReg(instr))
  }

  override def store ( instr : UInt ) : UInt = getInstrAddr(instr)
}

bgt (Branch if Greater Than) skips the next instruction if the specified register is greater than zero. This is implemented by adding 2 to the built-in pcReg register in the execute method.

new InstructionLogic("bgt", dataInDepend=false, dataOutDepend=false) {

  def decode ( instr : UInt ) : Bool = {
    getInstrCode(instr) === AdderInstruction.codeBGT
  }

  def execute ( instr : UInt ) : Unit = {
    when ( regs(getInstrReg(instr)) > 0.U ) { pcReg.bits := pcReg.bits + 2.U }
  }
}

Testing ProcessingModule began with the OrderedDecoupledHWIOTester from the iotesters package. The class makes it easy to define a sequence of input and output events without having to explicitly specify the exact number of cycles to advance or which ports to peek and poke at. The logging abilities also enable some debugging without having to inspect waveforms. Even with these advantages, I found it lacking in some aspects and even encountered a bug that hindered my progress for several days. Therefore, I created my own version of the class called DecoupledTester. This new class orders input and output events together instead of executing all input events immediately. By default, it fails the test when the maximum tick count is exceeded, which usually happens if the design under test incorrectly blocks on an input. DecoupledTester also automatically initializes all design inputs, which both shrinks the tests and avoids elaboration errors. Finally, the log messages emitted by tests are slightly more verbose and clearly formatted. The following is an example of a test written for the AdderModule described above:

it should "increment by 1" in {
  assertTesterPasses {
    new DecoupledTester("incr1"){

      val dut = Module(new AdderModule(dWidth))

      val events = new OutputEvent((dut.io.instr.pc, 0)) ::
      new InputEvent((dut.io.instr.in, AdderInstruction.createInt(codeIncr1, regVal=0.U))) ::
      new OutputEvent((dut.io.instr.pc, 1)) ::
      new InputEvent((dut.io.instr.in, AdderInstruction.createInt(codeStore, regVal=0.U))) ::
      new OutputEvent((dut.io.data.out.value, 1)) ::
      Nil
    }
  }
}

This is an example of output from the test when the design is implemented correctly:

Waiting for event 0: instr.pc = 0
Waiting for event 1: instr.in = 1
Waiting for event 2: instr.pc = 1
Waiting for event 2: instr.pc = 1
Waiting for event 2: instr.pc = 1
Waiting for event 3: instr.in = 3
Waiting for event 4: data.out.value = 1
Waiting for event 4: data.out.value = 1
Waiting for event 4: data.out.value = 1
All events completed!

The framework does well enough for the simple examples explained here, but my next goal is to prove its utility with a "real" instruction set. The first target is RISC-V, followed by other open architectures like POWER and OpenRISC. In parallel with these projects, I'll work on improving the designer interface of the framework by reducing boilerplate code and enhancing debugging capabilities. Once some basic processor implementations have been written and tested, there will be enhancements to improve performance through pipelining, branch prediction, and instruction re-ordering. In the meantime, here are the slides for this presentation and a PDF containing the AdderModule example that fits on a 6x4 inch flash card.

Wednesday, October 2, 2019

Electronic Holiday Cards

As the holidays approached last year, I pondered what I should prepare as gifts for friends and family. I always stress out about choosing presents because I want to make sure that I give something that's useful and/or long-lasting while accurately reflecting the relationship between me and the receiver. Cards with hand-written notes are highly valued in my social circle, but I wanted to give more than just a piece of paper. Being an electronics enthusiast, I like to have circuits in my cards, but the ones sold in stores are almost universally annoying and bad. Thus began my quest to design and make my own electronic holiday cards.


   My high-level goals at the outset were to create an electronic card that is highly interactive, can be programmed, and shows off my nerdy side. Store-bought cards usually just play a single audio clip when you open them, but I wanted to add a wider range of sounds and more ways to trigger them. Different types of effects other than audio would be cool as well; I've never seen a card with lights, for instance. Making the card re-programmable would serve two main purposes: enabling software fixes and upgrades after construction, and introducing people to the idea of modifying and extending their gift. Few of my family members know any code, but I like to press the topic in hopes they'll come to learn and enjoy it in time. In addition to being able to program it, adding operating notes and source code listings directly to the card would help encourage them to learn more. It's also an expression of my character that's visible on the card's exterior.




   The design is an electronic keyboard with four keys that can only play one note at a time. Each key represents one bit in an LSB-first four-bit number that selects a note out of a single-octave scale ranging from A3 to A4 in half-step increments. For example, pressing just K0 (the leftmost key) plays A3. Pressing K0 and K1 together plays B3 flat. This scheme leaves a few key combinations unused, so I plan to use those as triggers for additional sequences or modes in the future.
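   To make the scheme concrete, here is a minimal Arduino-style sketch of the key-scanning and note-selection loop. The pin assignments and helper names are hypothetical, and the card's full key-to-note mapping isn't reproduced here, so the lookup table below only fills in the two combinations described above; the real firmware differs, but the overall flow is the same: read the four keys into an LSB-first value, map it to a half-step offset from A3, and drive the speaker at the corresponding frequency.

// Hypothetical pin assignments; the real card's wiring is different.
const int KEY_PINS[4] = {2, 3, 4, 5};   // K0..K3, LSB first
const int SPEAKER_PIN = 9;              // pin driving the PAM8302 amplifier input

// Half-step offset from A3 for each 4-bit key value (-1 = unused combination).
// Only the entries for 0b0001 (A3) and 0b0011 (B-flat 3) come from the text above;
// the rest are illustrative placeholders.
const int NOTE_FOR_KEYS[16] = {
  -1,  0, -1,  1, -1, -1, -1, -1,
  -1, -1, -1, -1, -1, -1, -1, -1
};

void setup() {
  for (int i = 0; i < 4; i++) {
    pinMode(KEY_PINS[i], INPUT_PULLUP);  // rubber-dome buttons pull the pin low
  }
  pinMode(SPEAKER_PIN, OUTPUT);
}

void loop() {
  // Assemble the LSB-first key value.
  int keys = 0;
  for (int i = 0; i < 4; i++) {
    if (digitalRead(KEY_PINS[i]) == LOW) {
      keys |= (1 << i);
    }
  }

  int halfSteps = NOTE_FOR_KEYS[keys];
  if (halfSteps >= 0) {
    // A3 = 220 Hz; each half step multiplies the frequency by 2^(1/12).
    unsigned int freq = (unsigned int)(220.0 * pow(2.0, halfSteps / 12.0) + 0.5);
    tone(SPEAKER_PIN, freq);
  } else {
    noTone(SPEAKER_PIN);
  }
}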
   The materials are a mix of corrugated cardboard, cardstock, and glue. The cardboard makes for a much chunkier card than I had intended, but it actually feels nice and solid in the hands. The keys are implemented by partially cutting tabs into one side of the cardboard so they would fold down while remaining attached to the rest of the body. Underneath are rubber dome buttons that give the keys lots of travel and don't produce unwanted clicking noises. Paired with each key is a colored LED that lights when the key is pressed. The LEDs are Adafruit Sequins that have large solder pads and built-in current-limiting resistors. These make assembly much easier. Both the LEDs and keys are wired with slices of old IDE hard drive cables that thread through the cardboard up to the controller at the top half of the card. The controller is an ATMEGA328P in a 28-pin DIP package. At first, I thought that package would be too physically large, but it actually just fits within a cutout in the cardboard. Plus, I've only ever soldered through-hole parts before, so it improved ease of assembly. In addition to the buttons and LEDs, the controller is connected to the speaker driver, programming port and battery. The driver is another Adafruit product based on the PAM8302 that has adjustable volume and a shutdown pin. It drives an 8 ohm 0.25 watt speaker with a diameter of 1.5" that fits nicely next to the controller at the top of the card. The programming port at the top is connected directly to the VCC, RESET, MISO, MOSI, SCK, and GND pins on the controller. The programming protocol is specified in application note AVR910, and it works perfectly with the USBTinyISP programmer I have in my toolbox. Adjacent to the port is the battery holder and a switch to direct power from either the port or the battery. The battery holder opens at the edge of the card and accepts CR2032 coin cell batteries.


   I was careful to plan ahead and worked for several weeks before the big holiday party where I would give out the cards, but I still completely blew the deadline and didn't get the first set of five cards out until after New Year's. Despite that disappointment, this was one of the most fun and satisfying projects I have ever worked on. On their own, the electronics, software, and mechanics are very simple, but bringing them all together was both challenging and rewarding. When people did finally receive their cards, they genuinely enjoyed them. All of the cards kept working even after some use, though there were failures: paper partially detaching from the cardboard and thin speaker wires breaking off of the driver.


   With this year's holidays coming up, I definitely want to change things to make the cards easier to manufacture. The point-to-point wiring in the current version requires several hours of soldering, cutting, and stripping for a single card. Therefore, I want to finally learn PCB design so that I can simply wire pairs of boards together and sandwich them between paper and/or cardboard. In addition, I want to add many more features to the software (available here) like pre-programmed music sequences and a Simon-like game. Some fancy pop-outs that add additional dimensions when the card is opened would be cool too.

Tuesday, September 3, 2019

Middle School Robotics Classes with Johnny-Five


Last summer, I was offered the opportunity to hold robotics classes at a nearby middle school. I jumped at the chance because it would allow me to have complete control over the curriculum. I had already spent several years helping with a high school's FIRST Robotics Competition (FRC) team, but I was starting to feel constrained by the rules and expectations of the games. This time, there would be no need to balance the students' fun and education with success at regional competitions.
   Not to say that there weren't downsides. While at least one other mentor was present to help the students, none of them had any previous experience with robotics. Classes were held in a normal middle school classroom with no access to any tools or parts. The school provided computers to all students, but they were locked-down Chromebooks that I couldn't install any additional software on.
   These conditions led us to use the SparkFun Inventor's Kit 4.0 as a base for the hardware. It's relatively affordable and comes with all the parts needed to assemble a simple driving chassis. The electronics include the RedBoard, which is SparkFun's Arduino Uno clone, and a handful of sensors, LEDs, buttons, and switches. A nice printed manual with step-by-step instructions and lots of illustrations is also provided. We rarely used it in our classes, but it did provide inspiration and guidance when planning the projects. The only flaw was the flaky ultrasonic rangefinders, but those have since been upgraded in the new 4.1 version of the kit.
   Most of the students had already been introduced to JavaScript programming in previous classes, so we wanted to keep using the language to program the RedBoard. The Johnny-Five Node.js library was the obvious solution for us due to its compatibility with a large range of peripherals and excellent documentation. There is also a convenient Chrome app that packages the library with a simple UI. We used this app all through the first round of classes, but frequent bugs and the inability to save code to the filesystem drove me to adopt a different approach. I got out a stack of old laptops that I had accumulated over the years and installed Linux and Johnny-Five on them. Students would use the built-in text editor (either mousepad or gedit) to write code, save it to files on the desktop, and drag the files to a special icon that would invoke Johnny-Five to connect to the Arduino and run the program.


   After assembling the parts and software, the next big challenge was to think of fun projects for the students. Middle school kids lose interest a lot faster than the high schoolers I was used to, especially after a long 6-hour day of classes. I initially started each class with a short slideshow on the technology relevant to the following project, but this failed to make much of an impression. Later, I began class with the first part of the project, and after their interest was piqued by the new sensor or actuator, I would follow up with the explanation about it that was needed to finish the rest.


   The projects increased in complexity until the students could assemble a driving robot that used an ultrasonic rangefinder to navigate. For the first course, the robot was supposed to follow a hand that was placed in front of it. This turned out to be an exciting way to physically interact with the robot and learn how to build up complex logic in JavaScript. The fun and challenge both increased in the second course. The robot hardware ended up mostly identical, but the assigned task was to navigate through a simple walled course. Teams were encouraged to finish the course in the least amount of time. The sense of competition infused much more energy and focus into the students than ever before. I am still reluctant to admit it, but the prospect of winning or being the best is a crucial motivator for many people.


   Although it was hard work, I thoroughly enjoyed running my first robotics classes. My fellow mentors were extremely helpful and encouraging; the lessons wouldn't have been nearly as effective without their assistance. Though the students were sometimes hard to control, they were truly inspiring with the energy that they brought to the class. Some even engineered novel solutions that I had never considered and asked thought-provoking questions that led me down rabbit holes I wouldn't have otherwise explored. In all seriousness, I'm hopeful that this experience will encourage them to use technology to improve the future.

Wednesday, August 1, 2018

Introduction to Robotics Programming in C++

Despite many distractions and just plain lethargy, work still progresses at a slow pace on the FIRST Robotics Competition C++ learning framework that I introduced previously. I still think that it can provide an accessible and engaging experience for students learning robotics and programming in a way that will teach them how to contribute to the code on a real FRC robot. The RedBot still exists (though it's now sold with a black chassis), and the code that I'm writing to emulate WPILib for it is still here on GitHub. Now that it covers more of the WPILib API, I do want to do some major refactoring to improve the organization and modernize the C++. I also want to investigate integrating the project with GradleRIO somehow to make it easy for others to download and build it and its dependencies. First, though, I thought I should publicly demonstrate some of the framework's capabilities and give an example lesson based on it that starts with basic driving and builds up to following a line drawn on a flat surface.

Driving Forward


The first thing to do is to just make the robot move using its drive motors. In this example, the robot will drive forward by applying an equal amount of power to both motors when it's in Autonomous Mode. In Disabled Mode, it will stop the motors by setting the power to zero. The code to do so is below.

#include <WPILib.h>

class Robot : public frc::IterativeRobot
{
private:

  RedBotSpeedController myLeftMotor;
  RedBotSpeedController myRightMotor;

public:

  Robot() :
    myLeftMotor(0),
    myRightMotor(1)
  {
  }

  void AutonomousInit()
  {
    myLeftMotor.Set(0.6);
    myRightMotor.Set(0.6);
  }

  void DisabledInit()
  {
    myLeftMotor.Set(0.0);
    myRightMotor.Set(0.0);
  }
};

START_ROBOT_CLASS(Robot);

Deploy this program to the robot and enable Autonomous Mode to start the robot driving forward. Be sure to keep a finger on the Disable button so the robot doesn't drive off the table.


Once the robot starts moving, it may not keep a completely straight course, even though equal amounts of power are specified for both motors in the code. This will be addressed in later sections of this tutorial.

Breaking Down the Code


Here's an in-depth analysis of the whole code.

#include <WPILib.h>

This includes the code libraries needed to make the robot perform actions as well as to retrieve data from the robot. In this example, we use the speed controller class defined in these libraries to set motor power. All programs for the robot need to begin with this line.

class Robot : public frc::IterativeRobot
{

This begins the robot class, which contains variables for robot parts and methods that perform actions with the variables. There are two main parts to this class declaration: the name of our class, which is simply "Robot", and the public derivation from the base class "frc::IterativeRobot". The Robot class derives from frc::IterativeRobot so that it can use the methods and variables already defined in it. Deriving from this class is also required for every robot program.

private:

  RedBotSpeedController myLeftMotor;
  RedBotSpeedController myRightMotor;

Here, the speed controller variables are declared inside the Robot class. The RedBotSpeedController class represents a controller for the drive motors on the robot. The main purpose of a speed controller in this program is to specify how much power should be applied to its motor. There are two objects of this class in our program: one for the left motor, and one for the right motor. These objects are declared in the "private" section of the Robot class so that nothing outside the Robot class can access them.

public:

  Robot() :
    myLeftMotor(0),
    myRightMotor(1)
  {
  }

Now the "public" section of the Robot class is started. In this section, methods and variables are accessible by code outside the class. The first thing defined here is the Robot class constructor, which is a special method that runs whenever a new object of the Robot class is created. The only thing this constructor does  is to construct the speed controller objects that we declared above. The speed controller constructors require a single numerical argument that specifies the channel on the main control board that they're connected to. In every robot program, the left controller must always be constructed with channel 0, and the right controller must always be constructed with channel 1. This ensures that whatever speed is set for the controller in the following code is applied to the correct physical motor.

  void AutonomousInit()
  {
    myLeftMotor.Set(0.6);
    myRightMotor.Set(0.6);
  }

This is the code that makes the robot move when Autonomous Mode is enabled. The method AutonomousInit is inherited from frc::IterativeRobot (from which this Robot class is derived, as explained above) and is called once whenever the robot switches to Autonomous Mode from Disabled Mode. In the method's body, the speed controller objects are used to set a speed of 0.6 on both motors. Since 0.6 is a positive number, this will cause the robot to drive forward (a negative number would cause the robot to drive backwards). The robot will continue driving forward at this speed until a new speed is set on the controllers.

  void DisabledInit()
  {
    myLeftMotor.Set(0.0);
    myRightMotor.Set(0.0);
  }
};

To stop the robot when switching to Disabled Mode from Autonomous Mode, zero speed is set for both motors. This causes the motors to stop immediately; no coasting should occur. Just like the AutonomousInit method, the DisabledInit method is inherited from frc::IterativeRobot and runs once whenever the robot is disabled (as well as when the robot program first starts up). The curly brace and semicolon ("};") following the DisabledInit method conclude the Robot class.

START_ROBOT_CLASS(Robot);

Finally, this macro call specifies that the Robot class defined above should be used as the main program for the robot. Again, this line is required for all robot programs.

Turning


Try changing the motor speed values in the above program to make the robot turn rather than drive forward. Which values are needed to make it turn left, and which make it turn right? Which values make the robot turn about its center, and which make it turn about one side?


Using Timers


Instead of driving the robot forever, or until it falls off the end of the table, or until the Disable button is pressed, it may be helpful to use a timer to figure out when to stop. For this purpose, the Timer class comes in handy. An example of how to use it is shown below.

#include <WPILib.h>

class Robot : public frc::IterativeRobot
{
private:

  RedBotSpeedController myLeftMotor;
  RedBotSpeedController myRightMotor;
  enum DriveState { FORWARD, STOP_FORWARD, BACKWARD, STOP_BACKWARD };
  DriveState myState;
  frc::Timer myTimer;

public:

  Robot() :
    myLeftMotor(0),
    myRightMotor(1)
  {
  }

  void AutonomousInit()
  {
    myTimer.Stop();
    myTimer.Reset();
    myTimer.Start();

    myState = FORWARD;
    myLeftMotor.Set(0.6);
    myRightMotor.Set(0.6);
  }

  void AutonomousPeriodic()
  {
    if (myTimer.HasPeriodPassed(2.0) == false)
      {
        return;
      }

    double speed = 0.0;

    switch (myState)
      {
      case FORWARD:
        speed = 0.0;
        myState = STOP_FORWARD;
        break;

      case STOP_FORWARD:
        speed = -0.6;
        myState = BACKWARD;
        break;

      case BACKWARD:
        speed = 0.0;
        myState = STOP_BACKWARD;
        break;

      case STOP_BACKWARD:
        speed = 0.6;
        myState = FORWARD;
        break;
      }

    myTimer.Stop();
    myTimer.Reset();
    myTimer.Start();

    myLeftMotor.Set(speed);
    myRightMotor.Set(speed);
  }

  void DisabledInit()
  {
    myLeftMotor.Set(0.0);
    myRightMotor.Set(0.0);
  }
};

START_ROBOT_CLASS(Robot);

Just as in the first example, build and deploy this program and enable Autonomous Mode to move the robot. The robot should drive forward for 2 seconds, stop for 2 seconds, drive backward for 2 seconds, stop again, and then repeat the cycle.


Breaking Down the Code


This program is a little more complicated not only because it uses a Timer object, but also because it has a state machine to govern the robot's movement. A state machine is a useful coding pattern whenever the robot has to go through a sequence of steps. It can usually be written using a state variable and a switch statement.

class Robot : public frc::IterativeRobot
{
private:

  RedBotSpeedController myLeftMotor;
  RedBotSpeedController myRightMotor;
  enum DriveState { FORWARD, STOP_FORWARD, BACKWARD, STOP_BACKWARD };
  DriveState myState;
  frc::Timer myTimer;

Just as in the first example, the two speed controllers are declared using RedBotSpeedController objects. Following those, an enum (short for "enumeration") is declared to list all of the possible states that the robot can be in: moving forward, stopping in the forward position, moving backward, and stopping in the backward position. In this program, the robot is meant to cycle through all of these states in the order shown above, using the timer to remain in each state for a certain amount of time. The current-state variable, myState, is declared with the enum's type, DriveState. Finally, the timer itself is declared as an frc::Timer object.

public:

  Robot() :
    myLeftMotor(0),
    myRightMotor(1)
  {
  }

The constructor here initializes both speed controllers. Since the state variable and timer object do not need to be constructed with an argument, they are not listed here.

  void AutonomousInit()
  {
    myTimer.Stop();
    myTimer.Reset();
    myTimer.Start();

    myState = FORWARD;
    myLeftMotor.Set(0.6);
    myRightMotor.Set(0.6);
  }

The AutonomousInit method now has more code in it to set up the timer and state to begin the Autonomous mode. Since the Timer object automatically starts counting from the time that the robot program begins, it must be stopped, reset to zero, and started again every time Autonomous mode is enabled. Following that, the robot state is initialized to FORWARD, meaning that the robot should start driving forward when switching to Autonomous mode. To make that actually happen, a positive speed is set for both motors in the final two lines of this method.

  void AutonomousPeriodic()
  {
    if (myTimer.HasPeriodPassed(2.0) == false)
      {
        return;
      }

The AutonomousPeriodic method is run repeatedly for as long as the robot is in Autonomous, as opposed to the AutonomousInit method, which runs just once right when the mode is enabled. The first thing to do in this method is to check the timer. If it has not yet counted past 2 seconds, then it returns immediately; nothing else in this method is executed. When the timer does count 2 seconds, then the program will continue on to the following lines.

    double speed = 0.0;

    switch (myState)
      {
      case FORWARD:
        speed = 0.0;
        myState = STOP_FORWARD;
        break;

      case STOP_FORWARD:
        speed = -0.6;
        myState = BACKWARD;
        break;

      case BACKWARD:
        speed = 0.0;
        myState = STOP_BACKWARD;
        break;

      case STOP_BACKWARD:
        speed = 0.6;
        myState = FORWARD;
        break;

      }

Here is the main code for the state machine mentioned previously. Every two seconds while the robot is in Autonomous mode, this switch statement checks the current state, decides the next state to switch to, and changes the motor speed at the same time. For example, since the robot begins in the FORWARD state, two seconds after starting Autonomous mode it will change to the STOP_FORWARD state and set the speed to zero. Two seconds later, it will change to BACKWARD and change the speed to -0.6. Eventually it will reach the FORWARD state again and keep cycling between driving forward, stopping, driving backward, and stopping until the robot is disabled.

    myTimer.Stop();
    myTimer.Reset();
    myTimer.Start();

    myLeftMotor.Set(speed);
    myRightMotor.Set(speed);

These lines following the switch statement reset the timer every two seconds and apply the new speed to the speed controllers.

  void DisabledInit()
  {
    myLeftMotor.Set(0.0);
    myRightMotor.Set(0.0);
  }

Just as before, the robot should stop whenever it's disabled.

Detecting a Line


In addition to driving, programs can also read data from sensors connected to the robot. For example, infrared sensors can be used to detect if a nearby object is light or dark in color. When a voltage is applied to the supply input of the sensor, the voltage that it returns varies depending upon the amount of light that is reflected into its receiver: the voltage is high when it receives less light, and low when it receives more light.



Attach three infrared sensors to the bottom of the front of the robot like in the picture below. Be sure that they are facing down and are within a couple centimeters of the table surface (also ensure they don't actually touch the surface).


Wire the sensors to the analog inputs 3, 6, and 7 on the control board as shown below.


The following code can be used to continuously read values from the sensors and display them on the SmartDashboard.

#include <WPILib.h>

class Robot : public frc::IterativeRobot
{
private:

  frc::AnalogInput myLeftSensor;
  frc::AnalogInput myMiddleSensor;
  frc::AnalogInput myRightSensor;

public:

  Robot() :
    myLeftSensor(3),
    myMiddleSensor(6),
    myRightSensor(7)
  {
    frc::SmartDashboard::init();
  }

  void DisabledInit()
  {
  }

  void AutonomousInit()
  {
  }

  void AutonomousPeriodic()
  {
    frc::SmartDashboard::PutNumber("Left Sensor", myLeftSensor.Get());
    frc::SmartDashboard::PutNumber("Middle Sensor", myMiddleSensor.Get());
    frc::SmartDashboard::PutNumber("Right Sensor", myRightSensor.Get());
  }
};

START_ROBOT_CLASS(Robot);

Build and deploy this program to the robot and enable Autonomous mode. Then, start up SmartDashboard (make sure that it's using the server at localhost or 127.0.0.1). There should be three number fields visible. Change the fields to dials, and the SmartDashboard should look something like the screenshot below.


Breaking Down the Code


Sensors can be used in code much like how speed controllers were used in the previous examples. The first step is to declare them as variables in the Robot class:

class Robot : public frc::IterativeRobot
{
private:

  frc::AnalogInput myLeftSensor;
  frc::AnalogInput myMiddleSensor;
  frc::AnalogInput myRightSensor;

The three infrared sensors are declared as analog sensors because they return numeric, non-binary values; the possible values range from 0 to 1023. If a sensor could only return either a 0 or a 1, then it would be declared as a digital sensor.

  Robot() :
    myLeftSensor(3),
    myMiddleSensor(6),
    myRightSensor(7)
  {
    frc::SmartDashboard::init();
  }

Just like speed controllers, sensors have to be constructed with the number of the control board port they're connected to. Also in this constructor is an initialization call for the SmartDashboard. This is needed to be able to send data to and receive data from the SmartDashboard later in the robot program.

  void DisabledInit()
  {
  }

  void AutonomousInit()
  {
  }

Notice that the DisabledInit() and AutonomousInit() methods are both empty in this new program. That's because there is nothing to do just once whenever the robot changes modes. Instead, the sensors must be read continuously in the AutonomousPeriodic method below.

  void AutonomousPeriodic()
  {
    frc::SmartDashboard::PutNumber("Left Sensor", myLeftSensor.Get());
    frc::SmartDashboard::PutNumber("Middle Sensor", myMiddleSensor.Get());
    frc::SmartDashboard::PutNumber("Right Sensor", myRightSensor.Get());
  }

Every time this periodic method runs, all three infrared sensors are read, and their current values are put on the SmartDashboard using the PutNumber function. This function takes a label that describes what the data is and the current value that should be shown next to that label. For different types of data (other than numbers) that must be sent to the SmartDashboard, the PutBoolean() and PutString() functions are also available.

Detecting a Line


With the above sensor program running on the robot and SmartDashboard running on the driver station, manually move the robot so that one of the sensors is above a dark surface and the others above a light surface. How do the sensor readings change? Repeat this test for each of the three sensors. Do they all change to the same values? Are the readings affected by the ambient light in the room?


For the later activities, it will be important to determine if a sensor is above a dark line drawn on a white surface. That means that the analog sensor value needs to be converted to a digital value: 0 (false) for being off a line and 1 (true) for being on a line. Write a new method to perform this conversion, and use it to publish the digital values to the SmartDashboard. The SmartDashboard should eventually look like the screenshot below.
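One possible shape for that conversion is sketched below, building on the sensor-reading program above. The isOnLine method name and the threshold value are assumptions for illustration; the right threshold has to be found by experimenting with the raw readings observed earlier.

#include <WPILib.h>

class Robot : public frc::IterativeRobot
{
private:

  frc::AnalogInput myLeftSensor;
  frc::AnalogInput myMiddleSensor;
  frc::AnalogInput myRightSensor;

  // Hypothetical cutoff between "white surface" and "dark line";
  // tune it using the raw readings from the previous activity.
  static constexpr int kLineThreshold = 700;

  // Returns true when the given sensor appears to be over the dark line.
  // Higher analog values mean less reflected light (a darker surface).
  bool isOnLine(frc::AnalogInput& sensor)
  {
    return sensor.Get() > kLineThreshold;
  }

public:

  Robot() :
    myLeftSensor(3),
    myMiddleSensor(6),
    myRightSensor(7)
  {
    frc::SmartDashboard::init();
  }

  void AutonomousPeriodic()
  {
    frc::SmartDashboard::PutBoolean("Left On Line", isOnLine(myLeftSensor));
    frc::SmartDashboard::PutBoolean("Middle On Line", isOnLine(myMiddleSensor));
    frc::SmartDashboard::PutBoolean("Right On Line", isOnLine(myRightSensor));
  }
};

START_ROBOT_CLASS(Robot);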


Following a Straight Line


Most two-motor robots are unable to keep a straight path for very long just by applying equal power to both sides. As shown in the first experiment, the robot soon veers off to one side or zig-zags from side to side. This is caused by several factors, including imperfections in the drivetrain, deformities in the driving surface, and unequal distribution of electrical power to the motors.

Feedback from sensors can be used to overcome these obstacles. In this activity, a thick, straight black line on the surface will serve to guide the robot on the correct course. Using the skills learned in the previous examples, write a robot program that automatically adjusts the power to the motors depending on which of the infrared sensors see or don't see the line. For example, if the left sensor does not see the line, but the middle and right ones do, which way should the robot turn? How quickly should it turn? What should each of the motors' speeds be to accomplish that turn?


As a suggestion, begin with a low cruising speed for the motors. This will make it easier to judge whether the robot is seeing and following the line correctly, and to catch it if it becomes lost. Also, it may help to log the sensor readings and other program variables to a file on the driver station, to continuously publish them to the SmartDashboard, or both. Keep in mind that values can also be read from the SmartDashboard; this makes it very easy to quickly try out different sets of constants for tuning a program without having to recompile and restart the robot.
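As one possible starting point, here is a sketch that combines the motor declarations from the driving examples with the sensors and the hypothetical isOnLine helper from the previous activity. The channel numbers match the earlier examples, but the threshold, the speed constants, and the specific steering rules are assumptions that will need tuning on the actual robot.

#include <WPILib.h>

class Robot : public frc::IterativeRobot
{
private:

  RedBotSpeedController myLeftMotor;
  RedBotSpeedController myRightMotor;
  frc::AnalogInput myLeftSensor;
  frc::AnalogInput myMiddleSensor;
  frc::AnalogInput myRightSensor;

  // Same hypothetical threshold as in the sensor example above.
  static constexpr int kLineThreshold = 700;

  bool isOnLine(frc::AnalogInput& sensor)
  {
    return sensor.Get() > kLineThreshold;
  }

public:

  Robot() :
    myLeftMotor(0),
    myRightMotor(1),
    myLeftSensor(3),
    myMiddleSensor(6),
    myRightSensor(7)
  {
  }

  void AutonomousPeriodic()
  {
    // Illustrative speeds; tune these on the real robot.
    const double kCruiseSpeed = 0.3;
    const double kTurnSpeed = 0.1;

    bool left = isOnLine(myLeftSensor);
    bool middle = isOnLine(myMiddleSensor);
    bool right = isOnLine(myRightSensor);

    double leftSpeed = kCruiseSpeed;
    double rightSpeed = kCruiseSpeed;

    if (left && !right)
      {
        // The line has drifted toward the left sensor, so steer left
        // by slowing the left side.
        leftSpeed = kTurnSpeed;
      }
    else if (right && !left)
      {
        // The line has drifted toward the right sensor, so steer right
        // by slowing the right side.
        rightSpeed = kTurnSpeed;
      }
    else if (!middle)
      {
        // No useful reading (line lost or seen on both sides): slow down
        // so the robot doesn't run far off the course.
        leftSpeed = kTurnSpeed;
        rightSpeed = kTurnSpeed;
      }

    myLeftMotor.Set(leftSpeed);
    myRightMotor.Set(rightSpeed);
  }

  void DisabledInit()
  {
    myLeftMotor.Set(0.0);
    myRightMotor.Set(0.0);
  }
};

START_ROBOT_CLASS(Robot);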

Following a Line With a Turn


Once the robot can follow a straight line, the final step is to handle a sharp turn in the line of at least 90 degrees. Any misstep in the program at the wrong moment can now throw the robot completely off the line and cause it to become lost.


One approach to this problem is to augment the sensor-feedback-drive loop with some special logic for when the robot arrives at the turn. This code could make the robot follow a specific sequence of steps that forces it through the turn and onto the next straight segment. Remember that a state machine, like the one described in the timed driving example above, can be used to encode these steps in the program.
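To make that concrete, below is a minimal sketch of how such logic might be organized: a two-state machine layered on top of the line-following loop. The states, the way the turn is detected (all three sensors losing the line), the assumption that the turn is always to the left, and the pivot speed are all illustrative guesses to be replaced with whatever works on the real course.

#include <WPILib.h>

class Robot : public frc::IterativeRobot
{
private:

  RedBotSpeedController myLeftMotor;
  RedBotSpeedController myRightMotor;
  frc::AnalogInput myLeftSensor;
  frc::AnalogInput myMiddleSensor;
  frc::AnalogInput myRightSensor;

  // Hypothetical states: normal following, plus a pivot used to force the turn.
  enum LineState { FOLLOWING, PIVOTING };
  LineState myState;

  static constexpr int kLineThreshold = 700;

  bool isOnLine(frc::AnalogInput& sensor)
  {
    return sensor.Get() > kLineThreshold;
  }

public:

  Robot() :
    myLeftMotor(0),
    myRightMotor(1),
    myLeftSensor(3),
    myMiddleSensor(6),
    myRightSensor(7)
  {
  }

  void AutonomousInit()
  {
    myState = FOLLOWING;
  }

  void AutonomousPeriodic()
  {
    const double kCruiseSpeed = 0.3;
    const double kTurnSpeed = 0.1;
    const double kPivotSpeed = 0.2;

    bool left = isOnLine(myLeftSensor);
    bool middle = isOnLine(myMiddleSensor);
    bool right = isOnLine(myRightSensor);

    switch (myState)
      {
      case FOLLOWING:
        if (!left && !middle && !right)
          {
            // The line has vanished from all sensors: assume the robot has
            // overrun the corner and start pivoting. Here the pivot is always
            // to the left; a real program would remember which side the line
            // was last seen on.
            myState = PIVOTING;
          }
        else
          {
            // Simple steering, as in the straight-line activity.
            myLeftMotor.Set(left && !right ? kTurnSpeed : kCruiseSpeed);
            myRightMotor.Set(right && !left ? kTurnSpeed : kCruiseSpeed);
          }
        break;

      case PIVOTING:
        // Spin in place until the middle sensor finds the line again,
        // then go back to normal following.
        myLeftMotor.Set(-kPivotSpeed);
        myRightMotor.Set(kPivotSpeed);
        if (middle)
          {
            myState = FOLLOWING;
          }
        break;
      }
  }

  void DisabledInit()
  {
    myLeftMotor.Set(0.0);
    myRightMotor.Set(0.0);
  }
};

START_ROBOT_CLASS(Robot);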