Suppose we flip a coin three times.
What is the probability of getting Heads on the first coin flip?
What is the probability of getting Tails on the third coin flip?
What is the probability of not getting Tails on the second coin flip?
What is the probability of getting Heads on all three coin flips?
The probability of getting Heads for any coin flip is \(\frac{1}{2} = 0.5\).
The probability of getting Tails for any coin flip is \(\frac{1}{2} = 0.5\).
The probability of not getting Tails for any coin flip is the same as getting Heads for any coin flip, which is \(\frac{1}{2} = 0.5\).
We can define the events \(A\) = first coin flip is Heads, \(B\) = second coin flip is Heads, and \(C\) = third coin flip is Heads. Using the results from parts (a) - (c), the probability of each event is 0.5. Since the results of separate coin flips do not affect one another, these events are all independent. The probability that all three coin flips are Heads is \(P(A \cap B \cap C)\). Since the events are independent, we have
\[\begin{aligned} P(A \cap B \cap C) &= P(A) \times P(B) \times P(C) \\ & = 0.5 \times 0.5 \times 0.5 \\ & = 0.125\end{aligned}\]
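As an optional sanity check (not part of the lesson itself), this result can be verified in Python by enumerating all \(2^3 = 8\) equally likely outcomes:

```python
# Brute-force check of the product rule: enumerate every equally likely
# sequence of three flips and count the one that is all Heads.
from itertools import product

outcomes = list(product("HT", repeat=3))   # 8 equally likely outcomes
all_heads = [o for o in outcomes if o == ("H", "H", "H")]

p_all_heads = len(all_heads) / len(outcomes)
print(p_all_heads)  # 0.125
```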
A sack contains marbles of different colours. There are \(25\) marbles in total, and the probability of drawing a red marble from the sack is \(0.44\).
How many red marbles are in the sack?
How many marbles in the sack are not red?
What is the probability of drawing a marble that is not red?
We define the event \(A\) = the marble drawn is red. From the question, we have that \(S = 25\) and \(P(A) = 0.44\). So, using one of the formulas from the lesson gives:
\[\vert A \vert = S \times P(A) = 25 \times 0.44 = 11\]
Thus, there are \(11\) red marbles in the sack.
We have that \(\overline{A}\) = the marble drawn is not red, so we wish to find \(\vert \overline{A} \vert\). From part (a), we have that \(S = 25\) and \(\vert A \vert = 11\), so we can use one of the complement formulas to get:
\[\vert \overline{A} \vert = S - \vert A \vert = 25 - 11 = 14\]
Thus, there are 14 marbles in the sack that are not red.
There are two ways we can solve this problem.
(1) From part (b), we have that \(S = 25\) and \(\vert \overline{A} \vert = 14\). Using one of the formulas from the lesson gives:
\[P(\overline{A}) = \frac{\vert \overline{A} \vert}{S} = \frac{14}{25} = 0.56\]
Thus, the probability of drawing a marble that is not red is \(0.56\).
(2) Another way we could have solved this is using one of the formulas for the probability of the complement of an event. Since \(P(A) = 0.44\), we get:
\[P(\overline{A}) = 1 - P(A) = 1 - 0.44 = 0.56\]
Thus, the probability of drawing a marble that is not red is \(0.56\).
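The counts and probabilities above can be double-checked with a few lines of Python (an optional verification, not part of the solution; `round` guards against floating-point error):

```python
# Verify the marble counts: |A| = S * P(A), then the complement count
# and complement probability.
S = 25          # total number of marbles
p_red = 0.44    # P(A), given in the question

n_red = round(S * p_red)     # |A| = 11 red marbles
n_not_red = S - n_red        # complement count = 14 marbles
p_not_red = n_not_red / S    # complement probability = 0.56

print(n_red, n_not_red, p_not_red)
```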
An aquarium contains two kinds of fish: clownfish and pufferfish. Let the event \(A\) = the fish is a pufferfish, with \(\vert A \vert = 81\) and \(P(A) = 0.54\).
How many fish are there in total?
What is \(\overline{A}\) and \(P(\overline{A})\)?
Use any method to determine \(\vert \overline{A} \vert\).
We wish to find \(S\). Using one of the formulas from the lesson, we have:
\[S = \frac{\vert A \vert}{P(A)} = \frac{81}{0.54} = 150\]
Thus, there are 150 fish in total in the aquarium.
Since there are only two kinds of fish in the aquarium, instead of just saying \(\overline{A}\) = the fish is not a pufferfish, we can say \(\overline{A}\) = the fish is a clownfish. Then, we get
\[P(\overline{A}) = 1 - P(A) = 1 - 0.54 = 0.46\]
There are two ways we can solve this problem.
(1) From parts (a) and (b), we have that \(S = 150\) and \(P(\overline{A}) = 0.46\). Then, we get:
\[\vert \overline{A} \vert = S \times P(\overline{A}) = 150 \times 0.46 = 69\]
(2) From the question and part (a), we have \(\vert A \vert = 81\) and \(S = 150\). Then, we get:
\[\vert \overline{A} \vert = S - \vert A \vert = 150 - 81 = 69\]
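Both routes to \(\vert \overline{A} \vert\) can be checked numerically (an optional Python sketch, not part of the solution):

```python
# Verify the aquarium counts two ways.
n_puffer = 81        # |A|, given
p_puffer = 0.54      # P(A), given

S = round(n_puffer / p_puffer)      # total fish: 150
p_clown = 1 - p_puffer              # complement probability: 0.46

via_probability = round(S * p_clown)  # complement count via S * P
via_subtraction = S - n_puffer        # complement count via S - |A|

print(S, via_probability, via_subtraction)
```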
Use the formulas for the relationship between the probabilities of the union and intersection of events to answer the following.
Determine \(P(A)\), for \(P(A \cap B) = 0.25\), \(P(A \cup B) = 0.52\) and \(P(B) = 0.4\).
Determine \(P(C \cup D)\), for \(P(D) = 0.13\), \(P(C) = 0.26\) and \(P(C \cap D) = 0.09\).
Determine \(P(E \cap F)\), for \(P(E) = 0.38\), \(P(E \cup F) = 0.77\) and \(P(F) = 0.39\).
\(P(A) = P(A \cup B) + P(A \cap B) - P(B) = 0.52 + 0.25 - 0.4 = 0.37\)
\(P(C \cup D) = P(C) + P(D) - P(C \cap D) = 0.26 + 0.13 - 0.09 = 0.3\)
\(P(E \cap F) = P(E) + P(F) - P(E \cup F) = 0.38 + 0.39 - 0.77 = 0\)
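All three answers are rearrangements of the single identity \(P(A \cup B) = P(A) + P(B) - P(A \cap B)\). An optional Python check (rounding guards against floating-point error):

```python
# Rearrangements of the union/intersection identity:
#   P(A)     = P(A u B) + P(A n B) - P(B)
#   P(C u D) = P(C) + P(D) - P(C n D)
#   P(E n F) = P(E) + P(F) - P(E u F)
p_a = 0.52 + 0.25 - 0.4
p_c_union_d = 0.26 + 0.13 - 0.09
p_e_int_f = 0.38 + 0.39 - 0.77

print(round(p_a, 2), round(p_c_union_d, 2), round(p_e_int_f, 2))
```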
TRUE or FALSE: Is it possible to have the following probabilities? Justify your answer.
\(P(A) = 0.43\)
\(P(B) = 0.17\)
\(P(A \cup B) = 0.41\)
\(P(A \cap B) = 0.19\)
No, it is not possible to have these probabilities. We must have that
\(P(A \cap B) \leq P(A) \leq P(A \cup B)\) and \(P(A \cap B) \leq P(B) \leq P(A \cup B)\)
but this is not true here since \(P(A \cap B) > P(B)\) and \(P(A \cup B) < P(A)\).
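These monotonicity constraints are easy to check mechanically; a small optional Python sketch:

```python
# For any events A and B we need:
#   P(A n B) <= min(P(A), P(B))  and  max(P(A), P(B)) <= P(A u B)
p_a, p_b = 0.43, 0.17
p_union, p_int = 0.41, 0.19

feasible = p_int <= min(p_a, p_b) and max(p_a, p_b) <= p_union
print(feasible)  # False: the given probabilities are impossible
```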
At an exclusive social event, each guest is given a wristband that is either blue, green, orange or yellow. Of the \(500\) guests that attend the event, \(127\) have blue wristbands, \(98\) have green wristbands, \(143\) have orange wristbands, and \(132\) have yellow wristbands. For a randomly selected person at the event, we have the events \(B\) = the person has a blue wristband, \(G\) = the person has a green wristband, \(O\) = the person has an orange wristband, and \(Y\) = the person has a yellow wristband.
Determine the probability of each of the events.
Are the events disjoint or not? Justify your answer.
What is the probability that a randomly selected person has either a blue wristband or a yellow wristband?
What is the probability that a randomly selected person doesn’t have a blue wristband and doesn’t have a yellow wristband?
From the question, we have \(S = 500\), \(\vert B \vert = 127\), \(\vert G \vert = 98\), \(\vert O \vert = 143\) and \(\vert Y \vert = 132\). This gives the probability of each event to be: \[\begin{aligned} P(B) &= \frac{\vert B \vert}{S} = \frac{127}{500} = 0.254\\ P(G) &= \frac{\vert G \vert}{S} = \frac{98}{500} = 0.196\\ P(O) &= \frac{\vert O \vert}{S} = \frac{143}{500} = 0.286\\ P(Y) &= \frac{\vert Y \vert}{S} = \frac{132}{500} = 0.264\end{aligned}\]
The events \(B\), \(G\), \(O\) and \(Y\) are all disjoint because it is impossible for any two of them to occur at the same time (e.g., a person can’t have both a blue and a green wristband).
We wish to find \(P(B \cup Y)\). Since the events are disjoint, we have that:
\[P(B \cup Y) = P(B) + P(Y) = 0.254 + 0.264 = 0.518\]
Thus, there is a 0.518 probability that a randomly selected person has either a blue wristband or a yellow wristband.
We wish to find \(P(\overline{B} \cap \overline{Y})\). There are two ways to solve this problem.
(1) The first way to solve this is by using the result from part (c) and DeMorgan’s Laws. From part (c), we have that \(P(B \cup Y) = 0.518\). We then find the complement of this to be:
\[P(\overline{B \cup Y}) = 1 - P(B \cup Y) = 1 - 0.518 = 0.482\]
We then use one of DeMorgan’s Laws to get:
\[P(\overline{B} \cap \overline{Y}) = P(\overline{B \cup Y}) = 0.482\]
Thus, there is a \(0.482\) probability that a randomly selected person doesn’t have a blue wristband and doesn’t have a yellow wristband.
(2) Another way to solve this is by realizing that a person not having a blue wristband and not having a yellow wristband just means that they have either a green wristband or an orange wristband. So,
\[P(\overline{B} \cap \overline{Y}) = P(G \cup O)\]
Then, since the events are all disjoint, we have that:
\[P(G \cup O) = P(G) + P(O) = 0.196 + 0.286 = 0.482\]
Thus, there is a \(0.482\) probability that a randomly selected person doesn’t have a blue wristband and doesn’t have a yellow wristband.
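Both routes can be verified numerically (an optional Python check, not part of the solution):

```python
# Wristband counts from the question.
S = 500
counts = {"B": 127, "G": 98, "O": 143, "Y": 132}
probs = {colour: n / S for colour, n in counts.items()}

p_blue_or_yellow = probs["B"] + probs["Y"]   # disjoint events add: 0.518
p_neither_1 = 1 - p_blue_or_yellow           # De Morgan route: 0.482
p_neither_2 = probs["G"] + probs["O"]        # direct route: 0.482

print(p_blue_or_yellow, p_neither_1, p_neither_2)
```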
In a candy jar, the candy is categorized by the following attributes that have no influence on one another: the candy is either sweet or sour; and the candy is either hard or soft. The events are \(A\) = the candy is soft, and \(B\) = the candy is sweet. There are a total of 1875 pieces of candy in the jar, with \(P(A) = 0.36\) and \(P(\overline{B}) = 0.52\).
Determine \(S\), \(\vert A \vert\), \(\vert \overline{A} \vert\), \(\vert B \vert\) and \(\vert \overline{B} \vert\).
Are the events \(A\) and \(B\) independent or dependent? Explain.
Determine \(P(A \cap B)\), \(P(A \cap \overline{B})\), \(P(\overline{A} \cap B)\) and \(P(\overline{A} \cap \overline{B})\).
What is the sum of the probabilities from part (c)? Why is this the case?
\(S = 1875\)
\(\vert A \vert = S \times P(A) = 1875 \times 0.36 = 675\)
\(\vert \overline{A} \vert = S - \vert A \vert = 1875 - 675 = 1200\)
\(\vert \overline{B} \vert = S \times P(\overline{B}) = 1875 \times 0.52 = 975\)
\(\vert B \vert = S - \vert \overline{B} \vert = 1875 - 975 = 900\)
It is explicitly stated in the question that the attributes have no influence on one another. That is, whether a candy is sweet or sour doesn’t affect if the candy is hard or soft. So, the events \(A\) and \(B\) are independent.
Since \(A\) and \(B\) are independent (and hence so are their complements), we have:
\[\begin{aligned} P(A \cap B) &= P(A) \times P(B) \\ & = 0.36 \times [1 - P(\overline{B})] \\ & = 0.36 \times [1 - 0.52] \\ & = 0.36 \times 0.48 = 0.1728\end{aligned}\]
\[\begin{aligned} P(A \cap \overline{B}) &= P(A) \times P(\overline{B}) \\ &= 0.36 \times 0.52 \\ &= 0.1872\end{aligned}\]
\[\begin{aligned} P(\overline{A} \cap B) &= P(\overline{A}) \times P(B) \\ &= [1 - P(A)] \times [1 - P(\overline{B})] \\ &= [1 - 0.36] \times [1 - 0.52] \\ &= 0.64 \times 0.48 \\ &= 0.3072\end{aligned}\]
\[\begin{aligned} P(\overline{A} \cap \overline{B}) &= P(\overline{A}) \times P(\overline{B}) \\ &= [1 - P(A)] \times 0.52 \\ &= [1 - 0.36] \times 0.52 \\ &= 0.64 \times 0.52 \\ &= 0.3328\end{aligned}\]
\[P(A \cap B) + P(A \cap \overline{B}) + P(\overline{A} \cap B) + P(\overline{A} \cap \overline{B}) = 0.1728 + 0.1872 + 0.3072 + 0.3328 = 1\]
The sum of these probabilities is 1 because these are all the possible combinations we can have for the candy: soft and sweet, soft and sour, hard and sweet, or hard and sour.
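The four intersection probabilities form a complete 2x2 breakdown of the jar, which is easy to verify in Python (an optional check):

```python
# Independent attributes: soft/hard and sweet/sour.
p_soft = 0.36                 # P(A), given
p_sour = 0.52                 # P(B complement), given
p_hard = 1 - p_soft
p_sweet = 1 - p_sour

cells = {
    "soft & sweet": p_soft * p_sweet,
    "soft & sour":  p_soft * p_sour,
    "hard & sweet": p_hard * p_sweet,
    "hard & sour":  p_hard * p_sour,
}
print(cells, sum(cells.values()))  # the four cells sum to 1
```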
Is it possible for two events \(A\) and \(B\) with non-zero probabilities to be both disjoint and independent? Explain. If it is possible then provide an example.
Let us assume that two events \(A\) and \(B\) with non-zero probabilities are both disjoint and independent. Because they are independent, by definition, the events have no effect on each other whatsoever. Also, because they are disjoint, by definition, it is impossible for both \(A\) and \(B\) to occur at the same time. That means if \(A\) occurs, then \(B\) does not occur, and if \(B\) occurs, then \(A\) does not occur. But that means that the events \(A\) and \(B\) do affect one another, which contradicts our assumption that \(A\) and \(B\) are independent. Thus, it is impossible for any events \(A\) and \(B\) with non-zero probabilities to be both disjoint and independent.
Note that if either \(P(A) = 0\) or \(P(B) = 0\), then it is possible for \(A\) and \(B\) to be disjoint and independent.
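The contradiction can also be seen numerically: for disjoint events \(P(A \cap B) = 0\), while independence would require \(P(A \cap B) = P(A)P(B) > 0\). A tiny Python illustration using hypothetical probability values:

```python
# Hypothetical nonzero probabilities for two disjoint events.
p_a, p_b = 0.3, 0.5

p_int_disjoint = 0.0            # disjoint: the intersection is empty
p_int_independent = p_a * p_b   # what independence would require: 0.15

print(p_int_disjoint == p_int_independent)  # False, so both can't hold
```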
Suppose you are on a game show and are presented with \(10\) doors to pick from, where you win whatever is behind the door you pick. Behind one of these doors is a cash prize and behind the rest of the doors is nothing. Each door has the same probability of having the money behind it. Before you are able to pick a door, \(6\) of the doors that have nothing behind them are removed. You then pick one of the remaining doors. Once you’ve made your choice, even more doors with nothing behind them are removed until there are only \(2\) left: the door you picked, and one other door. You are then given the option of staying with the door you picked, or swapping it with the other door. What is the probability of winning the cash prize if you stay with your door? What is the probability of winning the cash prize if you swap doors? Show your work.
Initially, there are \(10\) doors to choose from, with each of them having a \(\frac{1}{10} = 0.1\) or \(10\)% probability of having the cash prize behind them. Then \(6\) of the doors are removed before a choice can be made, meaning that there are \(10 - 6 = 4\) doors left to pick from. Since the 6 doors were removed before a choice was made, the remaining 4 doors still all have an equal chance of having the money behind them. Specifically, the probability for each door is \(\frac{1}{4} = 0.25\) or \(25\)%.
Let us label the doors as \(A\), \(B\), \(C\) and \(D\), and suppose we pick door \(A\). We know that the probability of the money being behind door \(A\) is \(P(A) = 0.25\), and similarly, \(P(B) = 0.25\), \(P(C) = 0.25\) and \(P(D) = 0.25\). This means that the probability of the money not being behind door \(A\) is \(P(\overline{A}) = 1 - P(A) = 1 - 0.25 = 0.75\).
Now, even more doors with nothing behind them are removed until only \(2\) doors remain: the one we picked, and one other. Let us assume these are door \(A\) and door \(B\), which means that door \(C\) and door \(D\) were removed. We are now asked if we wish to stay with door \(A\), or swap to door \(B\). Even though there are only \(2\) doors left, they don’t have the same probability of having the money behind them. This is because we picked door \(A\) when there were \(4\) doors, so even though there are 2 doors now, we still have \(P(A) = 0.25\) and \(P(\overline{A}) = 0.75\). We know that \(\overline{A}\) = the money is not behind door \(A\) and since door \(B\) is the only door remaining, we must have that \(\overline{A} = B\) now. Thus, we have the new probability of door \(B\) to be \(P(B) = P(\overline{A}) = 0.75\). So, staying with door \(A\) gives us a \(0.25\) chance of winning the money, and swapping to door \(B\) gives us a \(0.75\) chance of winning the money.
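This is a ten-door variant of the Monty Hall problem, and the 0.25 / 0.75 split can be confirmed with a Monte Carlo simulation (an optional sketch; the door labels, trial count, and seed are arbitrary choices):

```python
import random

def win_rate(swap, trials=100_000, seed=42):
    """Simulate the game: 10 doors, 6 empty doors removed before the
    pick, then empty doors removed until 2 remain; optionally swap."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(10)
        # 6 empty doors are removed first, leaving the prize door and
        # 3 randomly chosen empty doors.
        empties = [d for d in range(10) if d != prize]
        remaining = [prize] + rng.sample(empties, 3)
        pick = rng.choice(remaining)          # each door has P = 0.25
        # Empty doors are removed until only our pick and one other
        # remain; the other door is the prize door whenever we did
        # not pick the prize ourselves.
        if pick == prize:
            other = rng.choice([d for d in remaining if d != pick])
        else:
            other = prize
        final = other if swap else pick
        wins += (final == prize)
    return wins / trials

print(win_rate(swap=False), win_rate(swap=True))  # near 0.25 and 0.75
```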