Use the three formulas above to answer the following questions.
What is the probability of drawing a \(7\) from a standard deck of \(52\) cards?
What is the probability of rolling an odd number on a standard \(6\)-sided die (‘die’ is the singular of ‘dice’)?
If the event \(A\) occurs \(20\) times and has a probability of \(0.05\), how many total outcomes are there?
If the event \(B\) has a probability of \(0.3\) and there are a total of \(350\) outcomes, how many times does \(B\) occur?
The probability of drawing a \(7\) is \(\frac{4}{52} = \frac{1}{13}\)
The probability of rolling an odd number is \(\frac{3}{6} = \frac{1}{2} = 0.5\)
Since \(\vert A \vert = 20\) and \(P(A) = 0.05\), we have that \(S = \frac{\vert A \vert}{P(A)} = \frac{20}{0.05} = 400\)
Since \(P(B) = 0.3\) and \(S = 350\), we have that \(\vert B \vert = S \times P(B) = 350 \times 0.3 = 105\)
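For readers who want to double-check these answers with a computer, here is a short Python sketch (purely illustrative; the worksheet itself requires no code, and any tiny floating-point rounding in the output can be ignored):

```python
# Checking the four answers with the formulas
#   P(A) = |A| / S,   S = |A| / P(A),   |A| = S * P(A)
print(4 / 52)       # probability of drawing a 7: 1/13, about 0.0769
print(3 / 6)        # probability of rolling an odd number: 0.5
print(20 / 0.05)    # total number of outcomes: 400
print(350 * 0.3)    # number of times B occurs: 105
```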
Use DeMorgan’s Laws to answer the following questions.
If the probability of the intersection of \(A\) and \(B\) is \(0.3\), what is the probability of the union of the complement of \(A\) and the complement of \(B\)?
If the probability of the intersection of the complement of \(A\) and the complement of \(B\) is \(0.55\), what is the probability of the union of \(A\) and \(B\)?
We can rewrite this question as: if \(P(A \cap B) = 0.3\), what is \(P(\overline{A} \cup \overline{B})\)? We can then use one of the complement formulas to find \(P(\overline{A \cap B})\), which gives \(P(\overline{A \cap B}) = 1 - P(A \cap B) = 1 - 0.3 = 0.7\). Now, we can use one of DeMorgan’s Laws to get \(P(\overline{A} \cup \overline{B}) = P(\overline{A \cap B}) = 0.7\).
We can rewrite this question as: if \(P(\overline{A} \cap \overline{B}) = 0.55\), what is \(P(A \cup B)\)? We can then use one of DeMorgan’s Laws to get \(P(\overline{A \cup B}) = P(\overline{A} \cap \overline{B}) = 0.55\). Finally, we use one of the complement formulas to get \(P(A \cup B) = 1 - P(\overline{A \cup B}) = 1 - 0.55 = 0.45\).
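The same two calculations can be checked in a couple of lines of Python (an optional, illustrative check):

```python
# By DeMorgan's Laws, both answers are just complements:
print(1 - 0.3)      # P(A-bar union B-bar) = 1 - P(A intersect B), about 0.7
print(1 - 0.55)     # P(A union B) = 1 - P(A-bar intersect B-bar), about 0.45
```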
Determine whether the following events are disjoint or not. If the events are disjoint, give the probability of \(A \cup B\).
A coin is flipped with the events: \(A\) = Heads, and \(B\) = Tails.
A coin is flipped twice with the events: \(A\) = first flip is Heads, and \(B\) = second flip is Tails.
A card is drawn from a standard deck of \(52\) cards with the events: \(A\) = card is a \(10\), and \(B\) = card is a face-card.
It is impossible for a coin flip to be both Heads and Tails at the same time, so the events \(A\) and \(B\) are disjoint. Then, we have \(S = 2\), with \(\vert A \vert = 1\) and \(\vert B \vert = 1\). This gives us \(P(A) = \frac{\vert A \vert}{S} = \frac{1}{2} = 0.5\) and \(P(B) = \frac{\vert B \vert}{S} = \frac{1}{2} = 0.5\). Now, since the events are disjoint, we have
\[P(A \cup B) = P(A) + P(B) = 0.5 + 0.5 = 1\]
which makes sense because \(A \cup B\) contains all possible outcomes.
These events are not disjoint because it is possible to get Heads on the first coin flip and then get Tails on the second coin flip.
It is impossible for a card to be both a 10 and a face-card, so the events \(A\) and \(B\) are disjoint. Then, we have \(S = 52\), with \(\vert A \vert = 4\) and \(\vert B \vert = 12\). This gives us \(P(A) = \frac{\vert A \vert}{S} = \frac{4}{52}\) and \(P(B) = \frac{\vert B \vert}{S} = \frac{12}{52}\). Now, since the events are disjoint, we have
\[\begin{aligned} P(A \cup B) & = P(A) + P(B) \\ & = \frac{4}{52} + \frac{12}{52} \\ & = \frac{4 + 12}{52} \\ & = \frac{16}{52} \\ & = \frac{4}{13}\end{aligned}\]
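As an optional check of the card-drawing scenario, the full deck can be listed in Python and the two events compared directly (the rank and suit names below are just illustrative labels, not something the worksheet requires):

```python
from fractions import Fraction

ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['hearts', 'diamonds', 'clubs', 'spades']
deck = [(rank, suit) for rank in ranks for suit in suits]       # 52 cards

tens = {card for card in deck if card[0] == '10'}               # event A
faces = {card for card in deck if card[0] in ('J', 'Q', 'K')}   # event B

print(tens & faces)                                  # set(): no overlap, so disjoint
print(Fraction(len(tens) + len(faces), len(deck)))   # 4/13, matching the answer above
```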
Determine if the following events are independent or dependent. If they are independent then solve for the probability of the intersection, \(A \cap B\).
When flipping a coin, \(A\) = first flip is Heads, and \(B\) = second flip is Tails.
When flipping a coin, \(A\) = first flip is Heads, and \(B\) = second flip is Heads.
When rolling a 6-sided die, \(A\) = value is even, and \(B\) = value is \(2\).
In a student council election, there are \(10\) candidates, with \(4\) of them in grade \(7\) and \(6\) of them in grade \(8\). There are two positions available: President and Treasurer; with \(A\) = the President is in grade \(7\), and \(B\) = the Treasurer is in grade \(8\).
The outcomes of different coin flips do not affect one another, so the events \(A\) and \(B\) are independent. Thus, \(P(A \cap B) = P(A) \times P(B) = 0.5 \times 0.5 = 0.25\).
The outcomes of different coin flips do not affect one another, so the events \(A\) and \(B\) are independent. Thus, \(P(A \cap B) = P(A) \times P(B) = 0.5 \times 0.5 = 0.25\).
Since both events refer to the same die roll rather than separate rolls, the outcome of one event affects the probability of the other (for example, knowing the value is \(2\) guarantees that it is even), so the events are dependent.
Whoever fills one position leaves one fewer candidate eligible for the other position, which changes its probability. Thus, the events \(A\) and \(B\) are dependent.
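A small simulation can make the difference between the coin-flip events and the die-roll events concrete. The sketch below is illustrative only (the seed and the number of trials are arbitrary choices):

```python
import random

random.seed(0)
N = 100_000

# Two separate coin flips: empirically P(A and B) is close to P(A) * P(B).
flips = [(random.choice('HT'), random.choice('HT')) for _ in range(N)]
p_a = sum(first == 'H' for first, _ in flips) / N
p_b = sum(second == 'T' for _, second in flips) / N
p_ab = sum(pair == ('H', 'T') for pair in flips) / N
print(p_ab, p_a * p_b)          # both close to 0.25 -> consistent with independence

# One die roll: "even and 2" is just "2", so P(A and B) = 1/6,
# but P(A) * P(B) = 1/2 * 1/6 = 1/12 -> the events are dependent.
rolls = [random.randint(1, 6) for _ in range(N)]
p_even = sum(r % 2 == 0 for r in rolls) / N
p_two = sum(r == 2 for r in rolls) / N
print(p_two, p_even * p_two)    # close to 0.167 vs close to 0.083
```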
Suppose we flip a coin three times.
(a) What is the probability of getting Heads on the first coin flip?
(b) What is the probability of getting Tails on the third coin flip?
(c) What is the probability of not getting Tails on the second coin flip?
(d) What is the probability of getting Heads on all three coin flips?
Solution:
The probability of getting Heads for any coin flip is \(\frac{1}{2} = 0.5\).
The probability of getting Tails for any coin flip is \(\frac{1}{2} = 0.5\).
The probability of not getting Tails for any coin flip is the same as getting Heads for any coin flip, which is \(\frac{1}{2} = 0.5\).
We can define the events \(A\) = first coin flip is Heads, \(B\) = second coin flip is Heads, and \(C\) = third coin flip is Heads. Using the results from parts (a) - (c), we have that the probability of each event is \(0.5\). Since the results of separate coin flips do not affect one another, these events are all independent. Then, the probability that all three coin flips are Heads is \(P(A \cap B \cap C)\). Since the events are all independent, we have
\[\begin{aligned} P(A \cap B \cap C) &= P(A) \times P(B) \times P(C) \\ & = 0.5 \times 0.5 \times 0.5 \\ & = 0.125\end{aligned}\]
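Because three coin flips have only \(2^3 = 8\) equally likely outcomes, the answers to parts (a) and (d) can also be confirmed by listing every outcome (an optional Python sketch):

```python
from itertools import product

outcomes = list(product('HT', repeat=3))   # all 8 equally likely outcomes
print(len(outcomes))                                        # 8

# Part (a): Heads on the first flip
print(sum(o[0] == 'H' for o in outcomes) / len(outcomes))   # 0.5

# Part (d): Heads on all three flips
print(outcomes.count(('H', 'H', 'H')) / len(outcomes))      # 0.125
```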
A sack contains marbles of different colours. There are \(25\) marbles in total, and the probability of drawing a red marble from the sack is \(0.44\).
(a) How many red marbles are in the sack?
(b) How many marbles in the sack are not red?
(c) What is the probability of drawing a marble that is not red?
Solution:
We define the event \(A\) = the marble drawn is red. From the question, we have that \(S = 25\) and \(P(A) = 0.44\). So, using one of the formulas from the lesson gives:
\[\vert A \vert = S \times P(A) = 25 \times 0.44 = 11\]
Thus, there are \(11\) red marbles in the sack.
We have that \(\overline{A}\) = the marble drawn is not red, so we wish to find \(\vert \overline{A} \vert\). From part (a), we have that \(S = 25\) and \(\vert A \vert = 11\), so we can use one of the complement formulas to get:
\[\vert \overline{A} \vert = S - \vert A \vert = 25 - 11 = 14\]
Thus, there are 14 marbles in the sack that are not red.
There are two ways we can solve this problem.
(1) From part (b), we have that \(S = 25\) and \(\vert \overline{A} \vert = 14\). Using one of the formulas from the lesson gives:
\[P(\overline{A}) = \frac{\vert \overline{A} \vert}{S} = \frac{14}{25} = 0.56\]
Thus, the probability of drawing a marble that is not red is \(0.56\).
(2) Another way we could have solved this is using one of the formulas for the probability of the complement of an event. Since \(P(A) = 0.44\), we get:
\[P(\overline{A}) = 1 - P(A) = 1 - 0.44 = 0.56\]
Thus, the probability of drawing a marble that is not red is \(0.56\).
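The marble counts can be reproduced with the same formulas in a few lines of Python (illustrative only; small floating-point rounding in the output is harmless):

```python
S = 25          # total marbles
p_red = 0.44    # P(A)

red = S * p_red             # |A| = S * P(A)
print(red)                  # 11 red marbles
print(S - red)              # 14 marbles that are not red
print((S - red) / S)        # about 0.56, method (1)
print(1 - p_red)            # about 0.56, method (2): complement formula
```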
An aquarium contains two kinds of fish: clownfish and pufferfish. Let the event \(A\) = the fish is a pufferfish, with \(\vert A \vert = 81\) and \(P(A) = 0.54\).
(a) How many fish are there in total?
(b) What are \(\overline{A}\) and \(P(\overline{A})\)?
(c) Use any method to determine \(\vert \overline{A} \vert\).
Solution:
We wish to find \(S\). Using one of the formulas from the lesson, we have:
\[S = \frac{\vert A \vert}{P(A)} = \frac{81}{0.54} = 150\]
Thus, there are 150 fish in total in the aquarium.
Since there are only two kinds of fish in the aquarium, instead of just saying \(\overline{A}\) = the fish is not a pufferfish, we can say \(\overline{A}\) = the fish is a clownfish. Then, we get
\[P(\overline{A}) = 1 - P(A) = 1 - 0.54 = 0.46\]
There are two ways we can solve this problem.
(1) From parts (a) and (b), we have that \(S = 150\) and \(P(\overline{A}) = 0.46\). Then, we get:
\[\vert \overline{A} \vert = S \times P(\overline{A}) = 150 \times 0.46 = 69\]
(2) From the question and part (a), we have \(\vert A \vert = 81\) and \(S = 150\). Then, we get:
\[\vert \overline{A} \vert = S - \vert A \vert = 150 - 81 = 69\]
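Here is the same aquarium calculation as a short Python sketch (optional; the variable names are our own):

```python
size_A = 81       # |A|, the number of pufferfish
p_A = 0.54        # P(A)

S = size_A / p_A             # total number of fish
print(S)                     # about 150
print(S * (1 - p_A))         # about 69 clownfish, method (1)
print(S - size_A)            # about 69 clownfish, method (2)
```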
Use the formulas for the relationship between the probabilities of the union and intersection of events to answer the following.
Determine \(P(A)\), for \(P(A \cap B) = 0.25\), \(P(A \cup B) = 0.52\) and \(P(B) = 0.4\).
Determine \(P(C \cup D)\), for \(P(D) = 0.13\), \(P(C) = 0.26\) and \(P(C \cap D) = 0.09\).
Determine \(P(E \cap F)\), for \(P(E) = 0.38\), \(P(E \cup F) = 0.77\) and \(P(F) = 0.39\).
Solution:
\(P(A) = P(A \cup B) + P(A \cap B) - P(B) = 0.52 + 0.25 - 0.4 = 0.37\)
\(P(C \cup D) = P(C) + P(D) - P(C \cap D) = 0.26 + 0.13 - 0.09 = 0.3\)
\(P(E \cap F) = P(E) + P(F) - P(E \cup F) = 0.38 + 0.39 - 0.77 = 0\)
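The three rearrangements of \(P(A \cup B) = P(A) + P(B) - P(A \cap B)\) can be checked directly (an optional one line per part; tiny floating-point rounding aside):

```python
print(0.52 + 0.25 - 0.4)     # P(A)        -> about 0.37
print(0.26 + 0.13 - 0.09)    # P(C ∪ D)    -> about 0.3
print(0.38 + 0.39 - 0.77)    # P(E ∩ F)    -> about 0
```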
TRUE or FALSE: Is it possible to have the following probabilities? Justify your answer.
\(P(A) = 0.43\)
\(P(B) = 0.17\)
\(P(A \cup B) = 0.41\)
\(P(A \cap B) = 0.19\)
Solution: No, it is not possible to have these probabilities. We must have that
\(P(A \cap B) \leq P(A) \leq P(A \cup B)\) and \(P(A \cap B) \leq P(B) \leq P(A \cup B)\)
but this is not true here since \(P(A \cap B) > P(B)\) and \(P(A \cup B) < P(A)\).
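The ordering conditions can be packaged into a small helper function; the function name `consistent` is our own illustrative choice, not something from the lesson.

```python
def consistent(p_a, p_b, p_union, p_inter):
    """Check the ordering P(A ∩ B) <= P(A), P(B) <= P(A ∪ B)."""
    return p_inter <= min(p_a, p_b) and max(p_a, p_b) <= p_union

print(consistent(0.43, 0.17, 0.41, 0.19))   # False: the given probabilities are impossible
```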
At an exclusive social event, each guest is given a wristband that is either blue, green, orange or yellow. Of the \(500\) guests that attend the event, \(127\) have blue wristbands, \(98\) have green wristbands, \(143\) have orange wristbands, and \(132\) have yellow wristbands. For a randomly selected person at the event, we have the events \(B\) = the person has a blue wristband, \(G\) = the person has a green wristband, \(O\) = the person has an orange wristband, and \(Y\) = the person has a yellow wristband.
(a) Determine the probability of each of the events.
(b) Are the events disjoint or not? Justify your answer.
(c) What is the probability that a randomly selected person has either a blue wristband or a yellow wristband?
(d) What is the probability that a randomly selected person doesn’t have a blue wristband and doesn’t have a yellow wristband?
Solution:
From the question, we have \(S = 500\), \(\vert B \vert = 127\), \(\vert G \vert = 98\), \(\vert O \vert = 143\) and \(\vert Y \vert = 132\). This gives the probability of each event to be: \[\begin{aligned} P(B) &= \frac{\vert B \vert}{S} = \frac{127}{500} = 0.254\\ P(G) &= \frac{\vert G \vert}{S} = \frac{98}{500} = 0.196\\ P(O) &= \frac{\vert O \vert}{S} = \frac{143}{500} = 0.286\\ P(Y) &= \frac{\vert Y \vert}{S} = \frac{132}{500} = 0.264\end{aligned}\]
The events \(B\), \(G\), \(O\) and \(Y\) are all disjoint because it is impossible for any two of them to occur at the same time (e.g. a person can’t have both a blue and a green wristband).
We wish to find \(P(B \cup Y)\). Since the events are disjoint, we have that:
\[P(B \cup Y) = P(B) + P(Y) = 0.254 + 0.264 = 0.518\]
Thus, there is a 0.518 probability that a randomly selected person has either a blue wristband or a yellow wristband.
We wish to find \(P(\overline{B} \cap \overline{Y})\). There are two ways to solve this problem.
(1) The first way to solve this is by using the result from part (c) and DeMorgan’s Laws. From part (c), we have that \(P(B \cup Y) = 0.518\). We then find the complement of this to be:
\[P(\overline{B \cup Y}) = 1 - P(B \cup Y) = 1 - 0.518 = 0.482\]
We then use one of DeMorgan’s Laws to get:
\[P(\overline{B} \cap \overline{Y}) = P(\overline{B \cup Y}) = 0.482\]
Thus, there is a \(0.482\) probability that a randomly selected person doesn’t have a blue wristband and doesn’t have a yellow wristband.
(2) Another way to solve this is by realizing that a person not having a blue wristband and not having a yellow wristband just means that they have either a green wristband or an orange wristband. So,
\[P(\overline{B} \cap \overline{Y}) = P(G \cup O)\]
Then, since the events are all disjoint, we have that:
\[P(G \cup O) = P(G) + P(O) = 0.196 + 0.286 = 0.482\]
Thus, there is a \(0.482\) probability that a randomly selected person doesn’t have a blue wristband and doesn’t have a yellow wristband.
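The wristband answers can be reproduced with a short Python sketch (optional; the dictionary keys mirror the event names, and tiny floating-point rounding in the sums can be ignored):

```python
S = 500
counts = {'B': 127, 'G': 98, 'O': 143, 'Y': 132}
probs = {event: count / S for event, count in counts.items()}
print(probs)                           # {'B': 0.254, 'G': 0.196, 'O': 0.286, 'Y': 0.264}

# Part (c): blue or yellow -- disjoint events, so the probabilities add.
print(probs['B'] + probs['Y'])         # about 0.518

# Part (d): neither blue nor yellow is the same as green or orange.
print(probs['G'] + probs['O'])         # about 0.482, method (2)
print(1 - (probs['B'] + probs['Y']))   # about 0.482, method (1) via DeMorgan + complement
```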
In a candy jar, the candy is categorized by the following attributes that have no influence on one another: the candy is either sweet or sour; and the candy is either hard or soft. The events are \(A\) = the candy is soft, and \(B\) = the candy is sweet. There are a total of 1875 pieces of candy in the jar, with \(P(A) = 0.36\) and \(P(\overline{B}) = 0.52\).
(a) Determine \(S\), \(\vert A \vert\), \(\vert \overline{A} \vert\), \(\vert B \vert\) and \(\vert \overline{B} \vert\).
(b) Are the events \(A\) and \(B\) independent or dependent? Explain.
(c) Determine \(P(A \cap B)\), \(P(A \cap \overline{B})\), \(P(\overline{A} \cap B)\) and \(P(\overline{A} \cap \overline{B})\).
(d) What is the sum of the probabilities from part (c)? Why is this the case?
Solution:
\(S = 1875\)
\(\vert A \vert = S \times P(A) = 1875 \times 0.36 = 675\)
\(\vert \overline{A} \vert = S - \vert A \vert = 1875 - 675 = 1200\)
\(\vert \overline{B} \vert = S \times P(\overline{B}) = 1875 \times 0.52 = 975\)
\(\vert B \vert = S - \vert \overline{B} \vert = 1875 - 975 = 900\)
It is explicitly stated in the question that the attributes have no influence on one another. That is, whether a candy is sweet or sour doesn’t affect if the candy is hard or soft. So, the events \(A\) and \(B\) are independent.
Since \(A\) and \(B\) are independent (and therefore so are their complements), we have:
\[\begin{aligned} P(A \cap B) &= P(A) \times P(B) \\ & = 0.36 \times [1 - P(\overline{B})] \\ & = 0.36 \times [1 - 0.52] \\ & = 0.36 \times 0.48 = 0.1728\end{aligned}\]
\[\begin{aligned} P(A \cap \overline{B}) &= P(A) \times P(\overline{B}) \\ &= 0.36 \times 0.52 \\ &= 0.1872\end{aligned}\]
\[\begin{aligned} P(\overline{A} \cap B) &= P(\overline{A}) \times P(B) \\ &= [1 - P(A)] \times [1 - P(\overline{B})] \\ &= [1 - 0.36] \times [1 - 0.52] \\ &= 0.64 \times 0.48 \\ &= 0.3072\end{aligned}\]
\[\begin{aligned} P(\overline{A} \cap \overline{B}) &= P(\overline{A}) \times P(\overline{B}) \\ &= [1 - P(A)] \times 0.52 \\ &= [1 - 0.36] \times 0.52 \\ &= 0.64 \times 0.52 \\ &= 0.3328\end{aligned}\]
\[P(A \cap B) + P(A \cap \overline{B}) + P(\overline{A} \cap B) + P(\overline{A} \cap \overline{B}) = 0.1728 + 0.1872 + 0.3072 + 0.3328 = 1\]
The sum of these probabilities is \(1\) because these are all the possible combinations we can have for the candy: soft and sweet, soft and sour, hard and sweet, or hard and sour.
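Here is the candy-jar working as a Python sketch (optional and illustrative; the names `p_soft`, `p_sour`, etc. are ours):

```python
S = 1875
p_soft = 0.36            # P(A)
p_sour = 0.52            # P(B-complement)
p_sweet = 1 - p_sour     # P(B)

# Part (a): counts
print(S * p_soft, S - S * p_soft)   # about 675 soft, about 1200 hard
print(S * p_sour, S - S * p_sour)   # about 975 sour, about 900 sweet

# Part (c): independence lets us multiply the two probabilities in each case.
cells = {
    'soft and sweet': p_soft * p_sweet,
    'soft and sour':  p_soft * p_sour,
    'hard and sweet': (1 - p_soft) * p_sweet,
    'hard and sour':  (1 - p_soft) * p_sour,
}
print(cells)                        # about 0.1728, 0.1872, 0.3072, 0.3328

# Part (d): the four cases cover every piece of candy, so the sum is 1.
print(sum(cells.values()))          # about 1
```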
Is it possible for two events \(A\) and \(B\) with non-zero probabilities to be both disjoint and independent? Explain. If it is possible then provide an example.
Solution: Let us assume that two events \(A\) and \(B\) with non-zero probabilities are both disjoint and independent. Because they are independent, by definition, the events have no effect on each other whatsoever. Also, because they are disjoint, by definition, it is impossible for both \(A\) and \(B\) to occur at the same time. That means if \(A\) occurs, then \(B\) does not occur, and if \(B\) occurs, then \(A\) does not occur. But that means that the events \(A\) and \(B\) do affect one another, which contradicts our assumption that \(A\) and \(B\) are independent. Thus, it is impossible for any events \(A\) and \(B\) with non-zero probabilities to be both disjoint and independent. In terms of the formulas: disjoint means \(P(A \cap B) = 0\), while independence would require \(P(A \cap B) = P(A) \times P(B) > 0\), so the two conditions cannot hold at the same time.
Note that if either \(P(A) = 0\) or \(P(B) = 0\), then it is possible for \(A\) and \(B\) to be both disjoint and independent.
Suppose you are on a game show and are presented with \(10\) doors to pick from, where you win whatever is behind the door you pick. Behind one of these doors is a cash prize and behind the rest of the doors is nothing. Each door has the same probability of having the money behind it. Before you are able to pick a door, \(6\) of the doors that have nothing behind them are removed. You then pick one of the remaining doors. Once you’ve made your choice, even more doors with nothing behind them are removed until there are only \(2\) left: the door you picked, and one other door. You are then given the option of staying with the door you picked, or swapping it with the other door. What is the probability of winning the cash prize if you stay with your door? What is the probability of winning the cash prize if you swap doors? Show your work.
Solution: Initially, there are \(10\) doors to choose from, with each of them having a \(\frac{1}{10} = 0.1\) or \(10\)% probability of having the cash prize behind them. Then \(6\) of the doors are removed before a choice can be made, meaning that there are \(10 - 6 = 4\) doors left to pick from. Since the 6 doors were removed before a choice was made, the remaining 4 doors still all have an equal chance of having the money behind them. Specifically, the probability for each door is \(\frac{1}{4} = 0.25\) or \(25\)%.
Let us label the doors as \(A\), \(B\), \(C\) and \(D\), and suppose we pick door \(A\). We know that the probability of the money being behind door \(A\) is \(P(A) = 0.25\), and similarly, \(P(B) = 0.25\), \(P(C) = 0.25\) and \(P(D) = 0.25\). This means that the probability of the money not being behind door \(A\) is \(P(\overline{A}) = 1 - P(A) = 1 - 0.25 = 0.75\).
Now, even more doors with nothing behind them are removed until only \(2\) doors remain: the one we picked, and one other. Let us assume these are door \(A\) and door \(B\), which means that door \(C\) and door \(D\) were removed. We are now asked if we wish to stay with door \(A\), or swap to door \(B\). Even though there are only \(2\) doors left, they don’t have the same probability of having the money behind them. This is because we picked door \(A\) when there were \(4\) doors, so even though there are only \(2\) doors now, we still have \(P(A) = 0.25\) and \(P(\overline{A}) = 0.75\). We know that \(\overline{A}\) = the money is not behind door \(A\), and since door \(B\) is the only other door remaining, the event \(\overline{A}\) now means the money is behind door \(B\). Thus, the probability for door \(B\) becomes \(P(B) = P(\overline{A}) = 0.75\). So, staying with door \(A\) gives us a \(0.25\) chance of winning the money, and swapping to door \(B\) gives us a \(0.75\) chance of winning the money.
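A quick Monte Carlo simulation supports this answer. The sketch below is illustrative only (the door numbering, seed, and trial count are arbitrary choices): it plays the game many times and records how often staying or swapping wins.

```python
import random

random.seed(1)
N = 100_000
stay_wins = swap_wins = 0

for _ in range(N):
    doors = list(range(10))
    prize = random.choice(doors)

    # Six empty doors are removed before the contestant picks anything.
    removed = random.sample([d for d in doors if d != prize], 6)
    remaining = [d for d in doors if d not in removed]        # 4 doors left
    pick = random.choice(remaining)

    # Empty doors are removed until only the pick and one other remain;
    # the prize door is never removed.
    if pick == prize:
        other = random.choice([d for d in remaining if d != pick])
    else:
        other = prize

    stay_wins += (pick == prize)
    swap_wins += (other == prize)

print(stay_wins / N)   # about 0.25
print(swap_wins / N)   # about 0.75
```

Swapping wins exactly when the original pick was wrong, which happens \(3\) times out of \(4\), so the simulated frequencies line up with the \(0.25\) and \(0.75\) computed above.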