# Linear Equations

The equations in the previous lab each included a single variable, and you solved each equation to find that variable’s value. Now let’s look at equations with multiple variables. For reasons that will become apparent, equations with two variables are known as linear equations.

## Solving a Linear Equation

Consider the following equation:

\(\begin{equation}2y + 3 = 3x - 1 \end{equation}\)

This equation includes two different variables, **x** and **y**. These variables depend on one another: the value of x is determined in part by the value of y and vice versa, so we can’t solve the equation to find absolute values for both x and y. However, we *can* solve the equation for one of the variables and obtain a result that describes a relative relationship between them.

For example, let’s solve this equation for y. First, we’ll get rid of the constant on the right by adding 1 to both sides:

\(\begin{equation}2y + 4 = 3x \end{equation}\)

Then we’ll use the same technique to move the constant on the left to the right to isolate the y term by subtracting 4 from both sides:

\(\begin{equation}2y = 3x - 4 \end{equation}\)

Now we can deal with the coefficient for y by dividing both sides by 2:

\(\begin{equation}y = \frac{3x - 4}{2} \end{equation}\)

Our equation is now solved. We’ve isolated **y** and defined it as (3x - 4)/2.

While we can’t express **y** as a particular value, we can calculate it for any value of **x**. For example, if **x** has a value of 6, then **y** can be calculated as:

\(\begin{equation}y = \frac{3\cdot6 - 4}{2} \end{equation}\)

This gives the result 14/2, which can be simplified to 7.
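You can check this arithmetic with a quick calculation in R:

```r
# Evaluate y = (3x - 4)/2 at x = 6
x = 6
y = (3*x - 4) / 2
print(y)
```

`## [1] 7`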

You can view the values of **y** for a range of **x** values by applying the equation to them using the following R code:

```
# Create a dataframe with an x column containing values from -10 to 10
df = data.frame(x = seq(-10, 10))
# Add a y column by applying the solved equation to x
df$y = (3*df$x - 4) / 2
# Display the dataframe
df
```

```
## x y
## 1 -10 -17.0
## 2 -9 -15.5
## 3 -8 -14.0
## 4 -7 -12.5
## 5 -6 -11.0
## 6 -5 -9.5
## 7 -4 -8.0
## 8 -3 -6.5
## 9 -2 -5.0
## 10 -1 -3.5
## 11 0 -2.0
## 12 1 -0.5
## 13 2 1.0
## 14 3 2.5
## 15 4 4.0
## 16 5 5.5
## 17 6 7.0
## 18 7 8.5
## 19 8 10.0
## 20 9 11.5
## 21 10 13.0
```

We can also plot these values to visualize the relationship between x and y as a line. For this reason, equations that describe a relative relationship between two variables are known as linear equations:

```
library(ggplot2)
library(repr)
options(repr.plot.width=4, repr.plot.height=4)
ggplot(df, aes(x,y)) + geom_point() + geom_line(color = 'blue')
```

In a linear equation, a valid solution is described by an ordered pair of x and y values. For example, valid solutions to the linear equation above include:

- (-10, -17)
- (0, -2)
- (9, 11.5)

The cool thing about linear equations is that we can plot the points for some specific ordered pair solutions to create the line, and then interpolate the x value for any y value (or vice-versa) along the line.

## Intercepts

When we use a linear equation to plot a line, we can easily see where the line intersects the X and Y axes of the plot. These points are known as *intercepts*. The *x-intercept* is where the line intersects the X (horizontal) axis, and the *y-intercept* is where the line intersects the Y (vertical) axis.

Let’s take a look at the line from our linear equation with the X and Y axes shown through the origin (0,0).

```
ggplot(df, aes(x,y)) + geom_point() + geom_line(color = 'blue') +
geom_hline(yintercept=0) + geom_vline(xintercept=0)
```

The x-intercept is the point where the line crosses the X axis, and at this point, the **y** value is always 0. Similarly, the y-intercept is where the line crosses the Y axis, at which point the **x** value is 0. So to find the x-intercept, we solve the equation for **x** when **y** is 0; and to find the y-intercept, we solve for **y** when **x** is 0.

For the x-intercept, our equation looks like this:

\(\begin{equation}0 = \frac{3x - 4}{2} \end{equation}\)

Which can be reversed to make it look more familiar with the x expression on the left:

\(\begin{equation}\frac{3x - 4}{2} = 0 \end{equation}\)

We can multiply both sides by 2 to get rid of the fraction:

\(\begin{equation}3x - 4 = 0 \end{equation}\)

Then we can add 4 to both sides to get rid of the constant on the left:

\(\begin{equation}3x = 4 \end{equation}\)

And finally we can divide both sides by 3 to get the value for x:

\(\begin{equation}x = \frac{4}{3} \end{equation}\)

Which simplifies to:

\(\begin{equation}x = 1\frac{1}{3} \end{equation}\)

So the x-intercept is 4/3, or 1 1/3 (approximately 1.333).
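As a numerical check, base R’s `uniroot` function finds where a function crosses zero within an interval, which is exactly what the x-intercept is:

```r
# Find the x value where y = (3x - 4)/2 crosses 0, searching between 0 and 3
f = function(x) (3*x - 4) / 2
root = uniroot(f, c(0, 3))$root
print(root)   # close to 4/3
```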

To get the y-intercept, we solve the equation for y when x is 0:

\(\begin{equation}y = \frac{3\cdot0 - 4}{2} \end{equation}\)

Since 3 x 0 is 0, this can be simplified to:

\(\begin{equation}y = \frac{-4}{2} \end{equation}\)

-4 divided by 2 is -2, so:

\(\begin{equation}y = -2 \end{equation}\)

This gives us our y-intercept, so we can plot both intercepts on the graph:

```
ggplot(df, aes(x,y)) + geom_line(color = 'blue') +
geom_hline(yintercept=0) + geom_vline(xintercept=0) +
annotate("text", x = 3, y = -2, label = "y-intercept")+
annotate("text", x = 5, y = 1, label = "x-intercept")
```

## Slope

It’s clear from the graph that the line from our linear equation describes a slope in which values increase as we travel up and to the right along the line. It can be useful to quantify the slope in terms of how much **y** increases (or decreases) for a given change in **x**. In the notation for this, we use the Greek letter Δ (*delta*) to represent change:

\(\begin{equation}slope = \frac{\Delta{y}}{\Delta{x}} \end{equation}\)

Sometimes slope is represented by the variable **m**, and the equation is written as:

\(\begin{equation}m = \frac{y_{2} - y_{1}}{x_{2} - x_{1}} \end{equation}\)

Although this form of the equation is a little more verbose, it gives us a clue as to how we calculate slope. What we need is any two ordered pairs of x,y values for the line - for example, we know that our line passes through the following two points:

- (0,-2)
- (6,7)

We can take the x and y values from the first pair, and label them x_{1} and y_{1}; and then take the x and y values from the second point and label them x_{2} and y_{2}. Then we can plug those into our slope equation:

\(\begin{equation}m = \frac{7 - -2}{6 - 0} \end{equation}\)

This is the same as:

\(\begin{equation}m = \frac{7 + 2}{6 - 0} \end{equation}\)

That gives us the result 9/6, which is 1 1/2 or 1.5.

So what does that actually mean? Well, it tells us that for every change of **1** in x, **y** changes by 1 1/2, or 1.5. So if we start from any point on the line and move one unit to the right (along the X axis), we’ll need to move 1.5 units up (along the Y axis) to get back to the line.
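The slope calculation above can be written directly in R using the two points (0,-2) and (6,7):

```r
# Slope between two points on the line: m = (y2 - y1) / (x2 - x1)
x1 = 0; y1 = -2
x2 = 6; y2 = 7
m = (y2 - y1) / (x2 - x1)
print(m)
```

`## [1] 1.5`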

You can plot the slope onto the original line with the following R code to verify it fits:

```
# A short segment starting at the y-intercept: 1 unit right, 1.5 units up
line = data.frame(x = c(0,1), y = c(-2,-0.5))
ggplot() + geom_line(data = df, aes(x,y),color = 'blue') +
geom_hline(yintercept=0) + geom_vline(xintercept=0) +
geom_line(data = line, aes(x,y), color = 'red', size = 3)
```

### Slope-Intercept Form

One of the great things about algebraic expressions is that you can write the same equation in multiple ways, or *forms*. The *slope-intercept form* is a specific way of writing a 2-variable linear equation so that the equation definition includes the slope and y-intercept. The generalized slope-intercept form looks like this:

\(\begin{equation}y = mx + b \end{equation}\)

In this notation, **m** is the slope and **b** is the y-intercept.

For example, let’s look at the solved linear equation we’ve been working with so far in this section:

\(\begin{equation}y = \frac{3x - 4}{2} \end{equation}\)

Now that we know the slope and y-intercept for the line that this equation defines, we can rewrite the equation as:

\(\begin{equation}y = 1\frac{1}{2}x + -2 \end{equation}\)

You can see intuitively that this is true. In our original form of the equation, to find y we multiply x by three, subtract 4, and divide by two; in other words, y is half of 3x - 4, which is 1.5x - 2. So these equations are equivalent, but the slope-intercept form has the advantages of being simpler, and of including two key pieces of information we need to plot the line represented by the equation: we know the y-intercept that the line passes through (0, -2), and we know the slope of the line (for every unit we add to x, we add 1.5 to y).
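You can use R to confirm that the original solved form and the slope-intercept form produce identical y values across a range of x values:

```r
# Compare y = (3x - 4)/2 with y = 1.5x - 2 for x from -10 to 10
x = seq(-10, 10)
all((3*x - 4) / 2 == 1.5*x - 2)
```

`## [1] TRUE`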

Let’s recreate our set of test x and y values using the slope-intercept form of the equation, and plot them to prove that this describes the same line:

```
## Make a data frame with the x values
df = data.frame(x = seq(-10,10))
## Add the y values using the formula y = mx + b
m = 1.5
b = -2
df$y = m * df$x + b
## Plot the result
ggplot() + geom_line(data = df, aes(x,y),color = 'blue') +
geom_hline(yintercept=0) + geom_vline(xintercept=0) +
geom_line(data = line, aes(x,y), color = 'red', size = 3) +
annotate("text", x = 3, y = -2, label = "y-intercept")
```

## Systems of Equations

Imagine you are at a casino, and you have a mixture of £10 and £25 chips. You know that you have a total of 16 chips, and you also know that the total value of chips you have is £250. Is this enough information to determine how many of each denomination of chip you have?

Well, we can express each of the facts that we have as an equation. The first equation deals with the total number of chips: we know that this is 16, and that it is the number of £10 chips (which we’ll call **x**) added to the number of £25 chips (**y**).

The second equation deals with the total value of the chips (£250), and we know that this is made up of **x** chips worth £10 and **y** chips worth £25.

Here are the equations:

\(\begin{equation}x + y = 16 \end{equation}\) \(\begin{equation}10x + 25y = 250 \end{equation}\)

Taken together, these equations form a *system of equations* that will enable us to determine how many of each chip denomination we have.

## Graphing Lines to Find the Intersection Point

One approach is to determine all possible values for x and y in each equation and plot them.

A collection of 16 chips could be made up of 16 £10 chips and no £25 chips, no £10 chips and 16 £25 chips, or any combination between these.

Similarly, a total of £250 could be made up of 25 £10 chips and no £25 chips, no £10 chips and 10 £25 chips, or a combination in between.

Let’s plot each of these ranges of values as lines on a graph:

```
## Load the plotting library
library(ggplot2)
## Create a data frame with the extremes of the possible numbers of chips
chips = data.frame(x = c(16,0), y = c(0,16))
## A second data frame with the extremes of the possible chip values
values = data.frame(x = c(25,0), y = c(0,10))
ggplot() + geom_line(data = chips, aes(x,y), color = 'blue', size = 1) +
geom_line(data = values, aes(x,y), color = 'orange', size = 1)
```

Looking at the graph, you can see that there is only a single combination of £10 and £25 chips that lies on both the line for all possible combinations of 16 chips and the line for all possible combinations of £250. The point where the lines intersect is (10, 6); or put another way, there are ten £10 chips and six £25 chips.

### Solving a System of Equations with Elimination

You can also solve a system of equations mathematically. Let’s take a look at our two equations:

\(\begin{equation}x + y = 16 \end{equation}\) \(\begin{equation}10x + 25y = 250 \end{equation}\)

We can combine these equations to eliminate one of the variable terms and solve the resulting equation to find the value of one of the variables. Let’s start by combining the equations and eliminating the x term.

We can combine the equations by adding them together, but first, we need to manipulate one of the equations so that adding them will eliminate the x term. The first equation includes the term **x**, and the second includes the term **10x**, so if we multiply the first equation by -10, the two x terms will cancel each other out. Here are the equations with the first one multiplied by -10:

\(\begin{equation}-10(x + y) = -10(16) \end{equation}\) \(\begin{equation}10x + 25y = 250 \end{equation}\)

After we apply the multiplication to all of the terms in the first equation, the system of equations looks like this:

\(\begin{equation}-10x + -10y = -160 \end{equation}\) \(\begin{equation}10x + 25y = 250 \end{equation}\)

Now we can combine the equations by adding them. The **-10x** and **10x** cancel one another, leaving us with a single equation like this:

\(\begin{equation}15y = 90 \end{equation}\)

We can isolate **y** by dividing both sides by 15:

\(\begin{equation}y = \frac{90}{15} \end{equation}\)

So now we have a value for **y**:

\(\begin{equation}y = 6 \end{equation}\)

So how does that help us? Well, now we have a value for **y** that satisfies both equations. We can simply use it in either of the equations to determine the value of **x**. Let’s use the first one:

\(\begin{equation}x + 6 = 16 \end{equation}\)

When we work through this equation, we get a value for **x**:

\(\begin{equation}x = 10 \end{equation}\)

So now we’ve calculated values for **x** and **y**, and we find, just as we did with the graphical intersection method, that there are ten £10 chips and six £25 chips.

You can run the following R code to verify that the equations are both true with an **x** value of 10 and a **y** value of 6:

```
x = 10
y = 6
(x + y == 16) & ((10*x) + (25*y) == 250)
```

`## [1] TRUE`
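As an additional check, base R’s `solve` function can compute the solution directly from the coefficient matrix of the system, giving the same answer as the graphical and elimination methods:

```r
# Coefficient matrix for x + y = 16 and 10x + 25y = 250
A = matrix(c(1, 1,
             10, 25), nrow = 2, byrow = TRUE)
b = c(16, 250)
# Solve for (x, y)
solve(A, b)
```

`## [1] 10  6`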

## Exponentials, Radicals, and Logs

Up to this point, all of our equations have included standard arithmetic operations, such as division, multiplication, addition, and subtraction. Many real-world calculations involve exponential values in which numbers are raised by a specific power.

## Exponentials

A simple case of using an exponential is squaring a number; in other words, multiplying a number by itself. For example, 2 squared is 2 times 2, which is 4. This is written like this:

\(\begin{equation}2^{2} = 2 \cdot 2 = 4\end{equation}\)

Similarly, 2 cubed is 2 times 2 times 2 (which is of course 8):

\(\begin{equation}2^{3} = 2 \cdot 2 \cdot 2 = 8\end{equation}\)

In R, you use the `**` operator, like this example in which **x** is assigned the value of 5 raised to the power of 3 (in other words, 5 x 5 x 5, or 5-cubed):

```
x <- 5**3
print(x)
```

`## [1] 125`

Multiplying a number by itself twice or three times to calculate the square or cube of a number is a common operation, but you can raise a number by any exponential power. For example, the following notation shows 4 to the power of 7 (or 4 x 4 x 4 x 4 x 4 x 4 x 4), which has the value:

\(\begin{equation}4^{7} = 16384 \end{equation}\)

In mathematical terminology, **4** is the *base*, and **7** is the *power* or *exponent* in this expression.

## Radicals (Roots)

While it’s common to need to calculate the result for a given base and exponential, sometimes you’ll need to calculate one or the other of those elements. For example, consider the following expression:

\(\begin{equation}?^{2} = 9 \end{equation}\)

This expression is asking, given a number (9) and an exponent (2), what’s the base? In other words, which number multiplied by itself results in 9? This type of operation is referred to as calculating the *root*, and in this particular case it’s the *square root* (the base for a specified number given the exponential **2**). In this case, the answer is 3, because 3 x 3 = 9. We show this with a **√** symbol, like this:

\(\begin{equation}\sqrt{9} = 3 \end{equation}\)

Other common roots include the *cube root* (the base for a specified number given the exponential **3**). For example, the cube root of 64 is 4 (because 4 x 4 x 4 = 64). To show that this is the cube root, we include the exponent **3** in the **√** symbol, like this:

\(\begin{equation}\sqrt[3]{64} = 4 \end{equation}\)

We can calculate any root of any non-negative number, indicating the exponent in the **√** symbol.

The R **sqrt** function calculates the square root of a number. To calculate other roots, you need to reverse the exponential calculation by raising the given number to the power of 1 divided by the given exponent:

```
## calculate and display the square root of 25
x = sqrt(25)
print(x)
```

`## [1] 5`

```
## calculate and display the cube root of 64
cr = 64**(1/3)
print(cr)
```

`## [1] 4`

The code used in R to calculate roots other than the square root reveals something about the relationship between roots and exponentials: the *n*th root of a number is the same as that number raised to the power of 1/*n*. For example, consider the following statement:

\(\begin{equation} 8^{\frac{1}{3}} = \sqrt[3]{8} = 2 \end{equation}\)

Note that a number to the power of 1/3 is the same as the cube root of that number.

Based on the same arithmetic, a number to the power of 1/2 is the same as the square root of the number:

\(\begin{equation} 9^{\frac{1}{2}} = \sqrt{9} = 3 \end{equation}\)

You can see this for yourself with the following R code:

`print(9**0.5)`

`## [1] 3`

`print(sqrt(9))`

`## [1] 3`

## Logarithms

Another consideration for exponential values is the occasional requirement to determine the exponent for a given number and base. In other words, how many times do you need to multiply a base number by itself to get the given result? This kind of calculation is known as the *logarithm*.

For example, consider the following expression:

\(\begin{equation}4^{?} = 16 \end{equation}\)

In other words, to what power must you raise 4 to produce the result 16?

The answer to this is 2, because 4 x 4 (or 4 to the power of 2) = 16. The notation looks like this:

\(\begin{equation}log_{4}(16) = 2 \end{equation}\)

In R, you can calculate the logarithm of a number for a specified base using the **logb** function, indicating the number and the base:

```
x = logb(16, 4)
print(x)
```

`## [1] 2`

The final thing you need to know about exponentials and logarithms is that there are some special logarithms:

The *common* logarithm of a number is its exponential for the base **10**. You’ll occasionally see this written using the usual *log* notation with the base omitted:

\(\begin{equation}log(1000) = 3 \end{equation}\)
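You can verify this in R with the `log10` function (plain `log` in R is the natural log, covered next):

```r
# Common (base 10) logarithm of 1000
log10(1000)
```

`## [1] 3`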

Another special logarithm is something called the *natural log*, which is the exponential of a number for base **e**, where **e** is a constant with the approximate value 2.718. This number occurs naturally in a lot of scenarios, and you’ll see it often as you work with data in many analytical contexts. For the time being, just be aware that the natural log is sometimes written as **ln**:

\(\begin{equation}log_{e}(64) = ln(64) = 4.1589 \end{equation}\)

The **log** function in R returns the natural log (base **e**) when no base is specified. To return the base 10 or common log in R, use the **log10** function:

```
## Natural log of 29
log(29)
```

`## [1] 3.367`

```
## Base 10 log of 100
log10(100)
```

`## [1] 2`

## Solving Equations with Exponentials

OK, so now that you have a basic understanding of exponentials, roots, and logarithms, let’s take a look at some equations that involve exponential calculations.

Let’s start with what might at first glance look like a complicated example, but don’t worry - we’ll solve it step-by-step and learn a few tricks along the way:

\(\begin{equation}2y = 2x^{4} ( \frac{x^{2} + 2x^{2}}{x^{3}} ) \end{equation}\)

First, let’s deal with the fraction on the right side. The numerator of this fraction is x^{2} + 2x^{2} - so we’re adding two exponential terms. When the terms you’re adding (or subtracting) have the same exponential, you can simply add (or subtract) the coefficients. In this case, x^{2} is the same as 1x^{2}, which when added to 2x^{2} gives us the result 3x^{2}, so our equation now looks like this:

\(\begin{equation}2y = 2x^{4} ( \frac{3x^{2}}{x^{3}} ) \end{equation}\)

Now that we’ve consolidated the numerator, let’s simplify the entire fraction by dividing the numerator by the denominator. When you divide exponential terms with the same variable, you simply divide the coefficients as you usually would and subtract the exponential of the denominator from the exponential of the numerator. In this case, we’re dividing 3x^{2} by 1x^{3}: the coefficient 3 divided by 1 is 3, and the exponential 2 minus 3 is -1, so the result is 3x^{-1}, making our equation:

\(\begin{equation}2y = 2x^{4} ( 3x^{-1} ) \end{equation}\)

So now we’ve got rid of the fraction on the right side, let’s deal with the remaining multiplication. We need to multiply 3x^{-1} by 2x^{4}. Multiplication is the opposite of division, so this time we’ll multiply the coefficients and add the exponentials: 3 multiplied by 2 is 6, and -1 + 4 is 3, so the result is 6x^{3}:

\(\begin{equation}2y = 6x^{3} \end{equation}\)

We’re in the home stretch now; we just need to isolate y on the left side, and we can do that by dividing both sides by 2. Note that we’re not dividing by an exponential; we simply need to divide the whole 6x^{3} term by two, and half of 6 times x^{3} is just 3 times x^{3}:

\(\begin{equation}y = 3x^{3} \end{equation}\)

Now we have a solution that defines y in terms of x. We can use R to plot the line created by this equation for a set of arbitrary *x* values:

```
# Create a dataframe with an x column containing values from -10 to 10
df = data.frame(x = seq(-10, 10))
# Add a y column by applying the solved equation y = 3x^3 to x
df$y = 3*df$x**3
# Display the dataframe
print(df)
```

```
## x y
## 1 -10 -3000
## 2 -9 -2187
## 3 -8 -1536
## 4 -7 -1029
## 5 -6 -648
## 6 -5 -375
## 7 -4 -192
## 8 -3 -81
## 9 -2 -24
## 10 -1 -3
## 11 0 0
## 12 1 3
## 13 2 24
## 14 3 81
## 15 4 192
## 16 5 375
## 17 6 648
## 18 7 1029
## 19 8 1536
## 20 9 2187
## 21 10 3000
```

```
# Plot the line
library(ggplot2)
ggplot(df, aes(x,y)) +
geom_line(color = 'magenta', size = 1) +
geom_hline(yintercept=0) + geom_vline(xintercept=0)
```

Note that the line is curved. This is symptomatic of an exponential equation: as values on one axis increase or decrease, the values on the other axis scale *exponentially* rather than *linearly*.

Let’s look at an example in which x is the exponential, not the base:

\(\begin{equation}y = 2^{x} \end{equation}\)

We can still plot this as a line:

```
# Create a dataframe with an x column containing values from -10 to 10
df = data.frame(x = seq(-10, 10))
# Add a y column by applying the equation y = 2^x
df$y = 2.0**df$x
## Plot the line
ggplot(df, aes(x,y)) +
geom_line(color = 'magenta', size = 1) +
geom_hline(yintercept=0) + geom_vline(xintercept=0)
```

Note that when the exponent is a negative number, the plotted result appears to be 0. It’s actually a very small fractional number; because the base is positive, the result will always be positive. Also, note the rate at which y increases as x increases - exponential growth can be pretty dramatic.

So what’s the practical application of this?

Well, let’s suppose you deposit $100 in a bank account that earns 5% interest per year. What would the balance of the account be in twenty years, assuming you don’t deposit or withdraw any additional funds?

To work this out, you could calculate the balance for each year:

After the first year, the balance will be the initial deposit ($100) plus 5% of that amount:

\(\begin{equation}y1 = 100 + (100 \cdot 0.05) \end{equation}\)

Another way of saying this is:

\(\begin{equation}y1 = 100 \cdot 1.05 \end{equation}\)

At the end of year two, the balance will be the year one balance plus 5%:

\(\begin{equation}y2 = 100 \cdot 1.05 \cdot 1.05 \end{equation}\)

Note that the growth factor for year two is the year one growth factor multiplied by itself - in other words, squared. So another way of saying this is:

\(\begin{equation}y2 = 100 \cdot 1.05^{2} \end{equation}\)

It turns out, if we just use the year as the exponent, we can easily calculate the growth after twenty years like this:

\(\begin{equation}y20 = 100 \cdot 1.05^{20} \end{equation}\)
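We can evaluate that formula directly in R; the result is the balance after twenty years:

```r
# Balance after 20 years: initial deposit times the growth factor to the 20th power
balance = 100 * 1.05**20
print(balance)
```

`## [1] 265.3298`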

Let’s apply this logic in R to see how the account balance would grow over twenty years:

```
# Create a dataframe with a Year column containing values from 1 to 20
df = data.frame(Year = seq(1, 20))
# Calculate the balance for each year based on the exponential growth from interest
df$Balance = 100 * (1.05**df$Year)
## Plot the line
ggplot(df, aes(Year, Balance)) +
geom_line(color = 'green', size = 1) +
geom_hline(yintercept=0) + geom_vline(xintercept=0)
```

# Polynomials

Some of the equations we’ve looked at so far include expressions that are actually *polynomials*; but what *is* a polynomial, and why should you care?

A polynomial is an algebraic expression containing one or more *terms* that each meet some specific criteria. Specifically:

- Each term can contain:
  - Numeric values that are coefficients or constants (for example 2, -5, 1/7)
  - Variables (for example, x, y)
  - Non-negative integer exponents (for example ^{2}, ^{64})
- The terms can be combined using arithmetic operations - but **not** division by a variable.

For example, the following expression is a polynomial:

\(\begin{equation}12x^{3} + 2x - 16 \end{equation}\)

When identifying the terms in a polynomial, it’s important to correctly interpret the arithmetic addition and subtraction operators as the sign for the term that follows. For example, the polynomial above contains the following three terms:

- 12x^{3}
- 2x
- -16

The terms themselves include:

- Two coefficients (12 and 2) and a constant (-16)
- A variable (x)
- An exponent (^{3})

A polynomial that contains three terms is also known as a *trinomial*. Similarly, a polynomial with two terms is known as a *binomial* and a polynomial with only one term is known as a *monomial*.

So why do we care? Well, polynomials have some useful properties that make them easy to work with. For example, if you multiply, add, or subtract polynomials, the result is always another polynomial.

## Standard Form for Polynomials

Technically, you can write the terms of a polynomial in any order; but the *standard form* for a polynomial is to start with the highest *degree* first and constants last. The degree of a term is the highest order (exponent) in the term, and the highest order in a polynomial determines the degree of the polynomial itself.

For example, consider the following expression:

\(\begin{equation}3x + 4xy^{2} - 3 + x^{3} \end{equation}\)

To express this as a polynomial in the standard form, we need to re-order the terms like this:

\(\begin{equation}x^{3} + 4xy^{2} + 3x - 3 \end{equation}\)

## Simplifying Polynomials

We saw previously how you can simplify an equation by combining *like terms*. You can simplify polynomials in the same way.

For example, look at the following polynomial:

\(\begin{equation}x^{3} + 2x^{3} - 3x - x + 8 - 3 \end{equation}\)

In this case, we can combine x^{3} and 2x^{3} by adding them to make 3x^{3}. Then we can add -3x and -x (which is really just a shorthand way to say -1x) to get -4x, and then add 8 and -3 to get 5. Our simplified polynomial then looks like this:

\(\begin{equation}3x^{3} - 4x + 5 \end{equation}\)

We can use R to compare the original and simplified polynomials to check them - using an arbitrary random value for **x**:

```
x = sample.int(100, 1)
(x**3 + 2*x**3 - 3*x - x + 8 - 3) == (3*x**3 - 4*x + 5)
```

`## [1] TRUE`

## Adding Polynomials

When you add two polynomials, the result is a polynomial. Here’s an example:

\(\begin{equation}(3x^{3} - 4x + 5) + (2x^{3} + 3x^{2} - 2x + 2) \end{equation}\)

Because this is an addition operation, you can simply add all of the like terms from both polynomials. To make this clear, let’s first put the like terms together:

\(\begin{equation}3x^{3} + 2x^{3} + 3x^{2} - 4x -2x + 5 + 2 \end{equation}\)

This simplifies to:

\(\begin{equation}5x^{3} + 3x^{2} - 6x + 7 \end{equation}\)

We can verify this with R:

```
x = sample.int(100, 1)
(3*x**3 - 4*x + 5) + (2*x**3 + 3*x**2 - 2*x + 2) == 5*x**3 + 3*x**2 - 6*x + 7
```

`## [1] TRUE`

## Subtracting Polynomials

Subtracting polynomials is similar to adding them, but you need to take into account that the second polynomial is negated.

Consider this expression:

\(\begin{equation}(2x^{2} - 4x + 5) - (x^{2} - 2x + 2) \end{equation}\)

The key to performing this calculation is to realize that the subtraction of the second polynomial is really an expression that adds -1(x^{2} - 2x + 2); so you can use the distributive property to multiply each of the terms in the polynomial by -1 (which in effect simply reverses the sign for each term). So our expression becomes:

\(\begin{equation}(2x^{2} - 4x + 5) + (-x^{2} + 2x - 2) \end{equation}\)

Which we can solve as an addition problem. First place the like terms together:

\(\begin{equation}2x^{2} + -x^{2} + -4x + 2x + 5 + -2 \end{equation}\)

Which simplifies to:

\(\begin{equation}x^{2} - 2x + 3 \end{equation}\)

Let’s check that with R:

```
x = sample.int(100, 1)
(2*x**2 - 4*x + 5) - (x**2 - 2*x + 2) == x**2 - 2*x + 3
```

`## [1] TRUE`

## Multiplying Polynomials

To multiply two polynomials, you need to perform the following two steps:

1. Multiply each term in the first polynomial by each term in the second polynomial.
2. Add the results of the multiplication operations, combining like terms where possible.

For example, consider this expression:

\(\begin{equation}(x^{4} + 2)(2x^{2} + 3x - 3) \end{equation}\)

Let’s do the first step and multiply each term in the first polynomial by each term in the second polynomial. The first term in the first polynomial is x^{4}, and the first term in the second polynomial is 2x^{2}, so multiplying these gives us 2x^{6}. Then we can multiply the first term in the first polynomial (x^{4}) by the second term in the second polynomial (3x), which gives us 3x^{5}, and so on until we’ve multiplied all of the terms in the first polynomial by all of the terms in the second polynomial, which results in this:

\(\begin{equation}2x^{6} + 3x^{5} - 3x^{4} + 4x^{2} + 6x - 6 \end{equation}\)

We can verify a match between this result and the original expression with the following R code:

```
x = sample.int(100, 1)
(x**4 + 2)*(2*x**2 + 3*x - 3) == 2*x**6 + 3*x**5 - 3*x**4 + 4*x**2 + 6*x - 6
```

`## [1] TRUE`

## Dividing Polynomials

When you need to divide one polynomial by another, there are two approaches you can take depending on the number of terms in the divisor (the expression you’re dividing by).

### Dividing Polynomials Using Simplification

In the simplest case, division of a polynomial by a monomial, the operation is really just simplification of a fraction.

For example, consider the following expression:

\(\begin{equation}(4x + 6x^{2}) \div 2x \end{equation}\)

This can also be written as:

\(\begin{equation}\frac{4x + 6x^{2}}{2x} \end{equation}\)

One approach to simplifying this fraction is to split it into a separate fraction for each term in the dividend (the expression we’re dividing), like this:

\(\begin{equation}\frac{4x}{2x} + \frac{6x^{2}}{2x}\end{equation}\)

Then we can simplify each fraction and add the results. For the first fraction, 2x goes into 4x twice, so the fraction simplifies to 2; and for the second, 6x^{2} is 2x multiplied by 3x. So our answer is 2 + 3x:

\(\begin{equation}2 + 3x\end{equation}\)

Let’s use R to compare the original fraction with the simplified result for an arbitrary value of **x**:

```
x = sample.int(100, 1)
(4*x + 6*x**2) / (2*x) == 2 + 3*x
```

`## [1] TRUE`

### Dividing Polynomials Using Long Division

Things get a little more complicated for divisors with more than one term.

Suppose we have the following expression:

\(\begin{equation}(x^{2} + 2x - 3) \div (x - 2) \end{equation}\)

Another way of writing this is to use the long-division format, like this:

\(\begin{equation} x - 2 |\overline{x^{2} + 2x - 3} \end{equation}\)

We begin long-division by dividing the highest order divisor into the highest order dividend - so in this case we divide x into x^{2}. x goes into x^{2} x times, so we put an x on top and then multiply it through the divisor: \(\begin{equation} \;\;\;\;x \end{equation}\) \(\begin{equation}x - 2 |\overline{x^{2} + 2x - 3} \end{equation}\) \(\begin{equation} \;x^{2} -2x \end{equation}\)

Now we’ll subtract the remaining dividend, and then carry down the -3 that we haven’t used to see what’s left: \(\begin{equation} \;\;\;\;x \end{equation}\) \(\begin{equation}x - 2 |\overline{x^{2} + 2x - 3} \end{equation}\) \(\begin{equation}- (x^{2} -2x) \end{equation}\) \(\begin{equation}\;\;\;\;\;\overline{\;\;\;\;\;\;\;\;\;\;4x -3} \end{equation}\)

OK, now we’ll divide our highest order divisor into the highest order of the remaining dividend. In this case, x goes into 4x four times, so we’ll add a 4 to the top line, multiply it through the divisor, and subtract the remaining dividend: \(\begin{equation} \;\;\;\;\;\;\;\;x + 4 \end{equation}\) \(\begin{equation}x - 2 |\overline{x^{2} + 2x - 3} \end{equation}\) \(\begin{equation}- (x^{2} -2x) \end{equation}\) \(\begin{equation}\;\;\;\;\;\overline{\;\;\;\;\;\;\;\;\;\;4x -3} \end{equation}\) \(\begin{equation}- (\;\;\;\;\;\;\;\;\;\;\;\;4x -8) \end{equation}\) \(\begin{equation}\;\;\;\;\;\overline{\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;5} \end{equation}\)

We’re now left with just 5, which we can’t divide further by x - 2; so that’s our remainder, which we’ll add as a fraction.

The solution to our division problem is:

\(\begin{equation}x + 4 + \frac{5}{x-2} \end{equation}\)

Once again, we can use R to check our answer:

```
x = sample.int(100, 1)
(x**2 + 2*x -3)/(x-2) == x + 4 + (5/(x-2))
```

`## [1] TRUE`

`sessionInfo()`

```
## R version 3.5.1 (2018-07-02)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 18.04.1 LTS
##
## Matrix products: default
## BLAS: /home/michael/anaconda3/lib/R/lib/libRblas.so
## LAPACK: /home/michael/anaconda3/lib/R/lib/libRlapack.so
##
## locale:
## [1] en_CA.UTF-8
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] repr_0.15.0 ggplot2_3.0.0 RevoUtils_11.0.1
## [4] RevoUtilsMath_11.0.0
##
## loaded via a namespace (and not attached):
## [1] Rcpp_0.12.18 compiler_3.5.1 pillar_1.3.0 plyr_1.8.4
## [5] bindr_0.1.1 base64enc_0.1-3 tools_3.5.1 digest_0.6.15
## [9] evaluate_0.11 tibble_1.4.2 gtable_0.2.0 pkgconfig_2.0.1
## [13] rlang_0.2.1 yaml_2.2.0 blogdown_0.9.8 xfun_0.4.11
## [17] bindrcpp_0.2.2 withr_2.1.2 stringr_1.3.1 dplyr_0.7.6
## [21] knitr_1.20 rprojroot_1.3-2 grid_3.5.1 tidyselect_0.2.4
## [25] glue_1.3.0 R6_2.2.2 rmarkdown_1.10 bookdown_0.7
## [29] purrr_0.2.5 magrittr_1.5 backports_1.1.2 scales_0.5.0
## [33] htmltools_0.3.6 assertthat_0.2.0 colorspace_1.3-2 labeling_0.3
## [37] stringi_1.2.4 lazyeval_0.2.1 munsell_0.5.0 crayon_1.3.4
```

# References

`knitr::write_bib(.packages(), "packages.bib")`

R Core Team. 2018. *R: A Language and Environment for Statistical Computing*. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/.

Wickham, Hadley, Winston Chang, Lionel Henry, Thomas Lin Pedersen, Kohske Takahashi, Claus Wilke, and Kara Woo. 2018. *Ggplot2: Create Elegant Data Visualisations Using the Grammar of Graphics*. https://CRAN.R-project.org/package=ggplot2.