Introduction to “oc” — the OpenShift Command Line Power Tool

If you use Kubernetes/OpenShift in any capacity, you have no doubt seen commands that start with oc being used to interact with the system. The oc tool can do a lot, and is pretty intuitive to use once you understand a few things.

I’m going to give a brief background/overview here about Kubernetes/OpenShift and then get into what oc does and how to use it. From this point on I will just refer to "OpenShift" and oc, but the vast majority of this article will also apply to Kubernetes and kubectl.

OpenShift has a RESTful API that is used for configuration

At the heart of OpenShift is an etcd database that contains a bunch of objects which define the current and desired state of the system. Each object type has one or more controllers that operate on objects of that type. Much like in any application, each object type is a representation of a real-world thing. In the case of OpenShift it is container infrastructure being represented: containers, load balancers, routers, that sort of thing. The API is highly extensible, so there is no limit to the number of types and objects you can use or create.

Rather than reading and writing to the etcd database directly (which would be highly error prone, and incredibly insecure), users interact with the OpenShift database through a RESTful API that performs all the authentication, access control, and integrity checking needed for the system. The API has numerous endpoints available for reading and manipulating objects. Since it is a normal HTTP-based REST API, it can be accessed programmatically through either user interfaces or command line tools (which translate user actions into API calls). oc is the official command line tool that provides this.

In addition to providing a human-friendly interface to the OpenShift API, the oc command line tool can do other things that make working with the system easier. It will keep track of sessions and configuration, as well as act as a source of documentation.

Getting up and running

First, you should download the version of oc that matches your OpenShift cluster. You can find specific versions on the OpenShift mirror site.

Logging in to the cluster API

Once you have the oc binary, you need to login to the API server for your cluster. This process will get you an API token you can use for future requests. The oc tool will manage this token for you, but you could also extract it and use it with direct HTTP calls if you so desire.

Copy/paste your API server URL and login like this:
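In general form (substitute your own API server URL and username):

```shell
oc login https://api.<cluster-domain>:6443 --username=<your-username>
```

oc will prompt you for a password or token, then store the resulting session token for later commands.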

If my username was “freedomben” and my API server was at https://api.mycluster.example.com:6443 (a made-up address; yours will differ), my login command would be:
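Using a made-up server address for illustration:

```shell
oc login https://api.mycluster.example.com:6443 --username=freedomben
```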

Getting help from the tool

If you don’t know the command you are looking for — help is your friend. You can use it at each level of the command. To see top-level commands available:
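Running oc with the --help flag lists them:

```shell
oc --help
```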

Don’t worry about these verbs yet, but you can also use it for sub menus, for example:
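For instance, help for the get verb:

```shell
oc get --help
```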

If you get stuck at any time, remember the --help flag is there for you.

Help on steroids, aka powerhouse help

When you get to editing properties in the objects, you can also drill down into relevant documentation using oc explain.

For example, the Deployment resource:
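That looks like:

```shell
oc explain deployment
```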

You can drill down with object notation as far as you’d like. If you don’t understand this yet, that’s ok, just remember it exists and come back to read this later:
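For example, drilling all the way down into the container spec within a Deployment:

```shell
oc explain deployment.spec.template.spec.containers
```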

This tool is invaluable when working with the YAML files that define objects. I reference it continually while working on object definitions or to check the significance of a particular field.

Basic command structure

The basic structure of oc commands is: oc <verb> <noun> [name]

The verbs can be any of a handful of supported verbs such as get, describe, create, edit, delete, apply, set, patch, label, annotate, expose, scale, autoscale, and more.

The nouns are the type of the object being acted on. With oc you can use either the singular or plural form of the noun, such as pod or pods. Those will get you the same thing. Additionally many objects have a short form, which comes in very handy if the type is long. For example pod can be po, deployment can be deploy, configmaps can be cm, and so on.

There are hundreds of possible nouns, although in practice there are only about a dozen you’re likely to use. Some of the most common are pods, deployments, services, routes, configmaps, and secrets.

You can easily get a list of available nouns from oc itself, by running:
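The command is:

```shell
oc api-resources
```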

This command will give you the long name, short name, and API group.

The [name] argument is optional depending on the command and whether it makes sense. For example with the “get” verb, leaving the name off will retrieve a list of all the objects of that type, in the current (or specified) namespace. To retrieve a specific object, give its name.

An example with the “get” verb and the “pod” noun would be:
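To list all pods in the current namespace:

```shell
oc get pods
```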

To get a specific pod, you need to know its name (see the list from the previous command), then you can query for it directly:
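For example, for a pod named “my-awesome-pod”:

```shell
oc get pod my-awesome-pod
```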

Retrieving data

The “get” and “describe” verbs will be used very often. The syntax is such that the verb will vary but the arguments will stay the same. For example, to get a pod named “my-awesome-pod”:
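That looks like:

```shell
oc get pod my-awesome-pod
```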

To describe that pod, just replace “get” with “describe”:
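Like so:

```shell
oc describe pod my-awesome-pod
```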

Getting objects in YAML or JSON format

When defining objects, we work with YAML or JSON (either one works, it’s your preference). As a result, it is common to want to see the YAML representation of an object directly. The -o flag on the get verb allows that:
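For example, to see the pod from earlier as YAML (swap -o yaml for -o json if you prefer JSON):

```shell
oc get pod my-awesome-pod -o yaml
```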

You may want to save this to a file, which you can edit and apply using the apply verb (covered next):
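A simple shell redirect works for this:

```shell
oc get pod my-awesome-pod -o yaml > my-awesome-pod.yaml
```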

Creating data

While there are imperative-style commands, the far more common way to create resources is to take a YAML file with the object’s definition and apply it through the API:
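Assuming a definition file named my-app.yaml (a hypothetical name):

```shell
oc create -f my-app.yaml
```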

Most people prefer the apply verb to create as it will function idempotently, creating objects that don't exist or updating those that do. This is usually what you want:
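The same hypothetical file, applied idempotently:

```shell
oc apply -f my-app.yaml
```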

Deploying something real

So far we’ve been dealing with general, abstract scenarios. Let’s use oc to manage an actual deployment of an application so we can see how this works in the real world.

The sample application just serves up an index HTML page and has a health check endpoint. You can find the source code for it here:

Deploying from YAML files

To deploy the sample application, let’s clone the git repo so we can make use of the pre-written YAML files. (How I created these will be covered in a future post, so stay tuned):
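Using a placeholder for the repository URL (substitute the real one from the link above):

```shell
git clone <sample-app-repo-url> basic-ocp-demo
cd basic-ocp-demo
```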

If you don’t have git installed, you can instead download the current version of the file here:

Before we use the file, let’s open it up and take a look at the YAML inside of it. It declares a few objects: Deployment, Service, Route. These are the basic objects required to stand up an application. Understanding the details here is outside the scope of this article. It’s important to understand that the YAML that defines our objects exists in this file. For your application you should have a file (or many files if your application is complex) similar to this.

Now that we have reviewed our YAML file we are ready to deploy it.

It’s best practice to deploy applications to a project or namespace to help keep them organized. There are a number of considerations regarding namespaces but they are outside of the scope of this article. To read more about sensible ways to organize, I would recommend OpenShift Namespace Configuration Management.

Let’s first create a new project (or namespace) in OpenShift in which to create our resources and deploy our application. We will use the new-project command to achieve this:
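Using the app’s name for the project (any name you like works):

```shell
oc new-project basic-ocp-demo
```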

Notice that oc new-project not only created a project for us, but it also switched us to that project. That means that without an explicit namespace in the command, the command will apply to the current namespace. If we need to run a command against a different namespace we can either switch to that namespace with:
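The switch looks like:

```shell
oc project <other-namespace>
```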

Or we can include a -n <namespace> argument to whatever command we are running, for example:
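For example:

```shell
oc get pods -n <other-namespace>
```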

Now let’s deploy our file using the apply verb with the -f flag specifying that input is coming from a file:
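With a placeholder for the YAML file name from the repo:

```shell
oc apply -f <path-to-yaml-file>
```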

Let’s explore the resources that were created. We will use the get verb with the all noun to retrieve a list of objects in the current project:
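That is:

```shell
oc get all
```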

If you used an existing project, you can limit the scope of the retrieval with the app label (which our YAML defines for the resources we created):
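Using the -l (label selector) flag:

```shell
oc get all -l app=basic-ocp-demo
```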

You can see that we got our Deployment, Service, and Route that we created. However we also got some Pods and a ReplicaSet that we didn’t create ourselves. Where did they come from?

Some resources will create other resources to help them do their job. The Deployment that we created is what created the ReplicaSet. That ReplicaSet in turn created the Pods that we see. These Pods were created using the template that we provided in the YAML file. Because the ReplicaSet and Pods are owned/managed by the Deployment, we don’t want to modify them directly (beyond just experimenting). Most of the time the owning resource will change things back, but not all controllers/operators are written perfectly.

If you look at the pods (oc get pods) you should see that there is only one. That is the running container of our app. Normally you would want at least two for high availability, so let's scale our app up to two replicas using the scale verb, which operates on Deployment objects (it works on numerous other object types too, but here we're using it on our Deployment):
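The scale command looks like:

```shell
oc scale deployment basic-ocp-demo --replicas=2
```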

Notice we had to tell it which type to operate on (deployment) as well as the name of the object of that type (basic-ocp-demo).

You should now see that the READY column is on its way to 2/2, and soon UP-TO-DATE and AVAILABLE will be 2:
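Check with:

```shell
oc get deployment basic-ocp-demo
```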

Let’s think about what the scale verb did though. We know that the desired state is just objects in the etcd database, manipulated through the REST API. So even though it felt like we were commanding the system to scale, in reality we were just updating the replicas property of our Deployment object.

Let’s take a look at the Deployment object in YAML form:
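Look for the replicas field under spec:

```shell
oc get deployment basic-ocp-demo -o yaml
```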

You can also use jsonpath to extract the value we care about (this is very useful for scripting, for example):
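For example, pulling out just the replica count:

```shell
oc get deployment basic-ocp-demo -o jsonpath='{.spec.replicas}'
```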

Now that we understand what happened, we can use the incredibly useful verb patch to achieve the same thing. As you start to automate more around OpenShift, you will find that patch is fantastic for use in scripts.

Let’s scale our replicas back down to one using patch:
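A strategic-merge patch that sets replicas back to 1:

```shell
oc patch deployment basic-ocp-demo -p '{"spec":{"replicas":1}}'
```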

This is one example of patch, but the command supports several different formats and options. You can even put patches in a JSON or YAML file and pass that file in to the patch command. Once you feel you have your sea legs, you should read more about it. Even just oc patch --help is really useful.

Let’s also try out the edit verb. With edit oc will pull down the latest version of the object and open it in $EDITOR for you to modify. When you save and exit, oc will see if you made any changes to the object, and if so it will update the object in OpenShift. Let’s scale back to two replicas once again using edit:
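The edit command is:

```shell
oc edit deployment basic-ocp-demo
```

Find spec.replicas in the editor, change 1 back to 2, then save and exit.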

So far we’ve looked at the resources but we haven’t actually done anything with the app itself. This sample app has an HTML/browser endpoint and a JSON endpoint. Let’s do a “get” on the Route object and get the URL for the app:
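Assuming the Route is named basic-ocp-demo like the other objects in our YAML:

```shell
oc get route basic-ocp-demo
```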

You should see the “HOST” there, which is a URL you can hit with the browser. The protocol (http or https) is missing though, and there is a little extra information. We can easily use oc’s jsonpath (mentioned above) and a bit of bash to build a clickable URL for us. To figure out the right jsonpath you may wish to use the -o yaml option so you can visualize the object.

Here’s how we can get that clickable URL in our terminal:
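Extracting just the host with jsonpath (assuming the Route name basic-ocp-demo):

```shell
oc get route basic-ocp-demo -o jsonpath='{.spec.host}'
```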

Building on that command, here’s how a bit of bash can make a clickable URL in our terminal:
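A sketch, assuming the Route name basic-ocp-demo and using the presence of spec.tls to pick the protocol:

```shell
host="$(oc get route basic-ocp-demo -o jsonpath='{.spec.host}')"
tls="$(oc get route basic-ocp-demo -o jsonpath='{.spec.tls.termination}')"
if [ -n "$tls" ]; then
  echo "https://${host}"
else
  echo "http://${host}"
fi
```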

(You may need to Ctrl+click to get it to open in a browser depending on your terminal application)

We can also query the health check JSON endpoint with curl from outside of the cluster using that URL:
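Assuming the health check lives at /health (check the sample app’s source for the actual path):

```shell
curl "http://$(oc get route basic-ocp-demo -o jsonpath='{.spec.host}')/health"
```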

So far we have created and we have modified. Now let’s try deleting a resource. Remember that we didn’t (directly) create the pods. They were created by a ReplicaSet, which itself was created by our Deployment. The reason for using a ReplicaSet instead of directly creating a Pod is that if a Pod dies due to an application crash (or anything else), the system will recognize that current state != desired state and automatically spin up a new pod to replace it. It also makes scaling up as easy as increasing the number of replicas.

Let’s delete a pod and see a new one get spun up for us by the ReplicaSet. In one terminal window you can run oc get pods -w to watch the pods change as you run this delete command (substitute in the pod name of your pod):
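Substituting your actual pod name:

```shell
oc delete pod <pod-name>
```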

Now run oc get pods and see that the old one is either gone or terminating, and a new one is spinning up. If we didn't have a ReplicaSet to replace this pod, it would simply be gone, so ReplicaSets are an important feature.

Should you find yourself needing to restart a pod/container, this is typically the approach many would recommend (deleting the older pods and letting new ones take their place). As with many actions however, just consider the environment you are operating in and whether a pod disappearing suddenly will cause problems.

If you are running the application in production and there are not enough replicas to handle the load should one disappear, you should scale up before deleting a Pod! (Generally speaking you should always have enough replicas that one or two can die without the application going down, but that’s a topic for another article).

If you were to delete the Deployment object, the ReplicaSet it owns would also get deleted. When the ReplicaSet gets deleted, OpenShift will also delete resources owned by it, which includes the Pods. As you can see, the concept of ownership in OpenShift is important.

You should continue playing with the resources here until you are feeling comfortable. When you are ready, you can use the “delete” verb with a label selector to easily delete all the resources we created for this exercise.

Use the “delete” verb on the “all” resource using the label that is on each resource. Here we are telling OpenShift to delete all objects with the app label set to basic-ocp-demo:
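That is:

```shell
oc delete all -l app=basic-ocp-demo
```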

You could of course delete each resource one at a time if you like, but I prefer the label approach.

Deploying magically: Using OpenShift’s s2i feature

If you’ve used some PaaS (Platform as a Service) offerings, you may have experienced the ability to deploy an app and have the system build the container for you. Heroku is a popular example of this. OpenShift has a feature that allows a similar workflow. The nice thing about it on OpenShift is you are able to introspect and see what the system created for you. You can save it, change it, etc.

oc has a "new-app" command that can take existing repos and deploy them as containers running on OpenShift. As with other PaaSes, the application is required to conform to one of the supported runtimes.

The easiest supported runtime, and one that works with basically every project, is to just have a Dockerfile at the top level of the repository. When OpenShift inspects the repository to determine the most appropriate runtime, it will see the Dockerfile and use it to build the image. This is by far the most flexible option, as nearly everything can run/build in a container.

Once it has determined the build strategy, OpenShift will then create several OpenShift objects that will build and deploy the image of your application. Details of how this process works are out of scope, but fortunately our sample application supports both Dockerfile and ruby s2i, so you can try out both. For this exercise we’ll just do the Dockerfile strategy, but afterward I would encourage you to see if you can get it building on the Ruby s2i image.

We’ll use the same sample application as before, but instead of cloning it locally and deploying it from the YAML file, we’ll create a project for it and pass the git repo URL to oc and let 'er rip. There are quite a few arguments/switches you can pass to oc new-app to change the behavior, but we'll keep things simple for now:
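With a hypothetical project name and a placeholder for the repo URL:

```shell
oc new-project basic-ocp-demo-s2i
oc new-app <git-repo-url>
```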

By default, OpenShift will build and deploy the master branch. If we wanted to deploy the production branch instead, we can tell OpenShift by appending #<branch-name> to the end of the git URL:
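For example (repo URL still a placeholder):

```shell
oc new-app <git-repo-url>#production
```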

Once the application is built and deployed, and the Service object is created (all things that oc new-app will do for us), we can use the "expose" verb to create a Route object for our Service object:
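Assuming the Service ended up named basic-ocp-demo (oc new-app names the objects it creates after the repository):

```shell
oc expose service basic-ocp-demo
```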

This will create a Route object for us that points to the Service object created by oc new-app. oc new-app also created a number of objects that are useful but outside the scope of this article. However you can easily take a look at what was created by querying:
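That query is:

```shell
oc get all
```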

If the namespace has other resources not related to this app, you can add a label query to limit the values returned:
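Assuming the app label value is basic-ocp-demo (oc new-app applies an app label based on the name it chose):

```shell
oc get all -l app=basic-ocp-demo
```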

You should play around with some of these objects, reading them, changing them, etc.

When you are done, delete all the resources:
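Same label-selector approach as before (label value assumed to be basic-ocp-demo):

```shell
oc delete all -l app=basic-ocp-demo
```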

In closing

You may be feeling a little overwhelmed with all of this, but once you have written a few of these it will start to be intuitive. It’s not terribly different than learning a simple programming language. You need to practice!

A good command line is hard to beat for efficiency and power, and the oc command is no exception. Time invested mastering the tool will be repaid many times over in the future.

Ben Porter is a Linux and open source advocate, currently working as an OpenShift consultant for Red Hat.
