Cumulative distribution functions will also be referred to simply as distribution functions or distributions. We start by giving two large classes of CDFs. There are CDFs that belong to neither of these classes, but for practical purposes they may be ignored (for now).
To interpret $f(a)$, take a small positive number $\delta$ and look at $$ F(a+\delta)-F(a) = \int\limits_{a}^{a+\delta}f(u) du \approx \delta f(a). $$ In other words, $f(a)$ measures the chance of the random variable taking values near $a$: the higher the pdf at a point, the greater the chance of taking values near that point.
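This approximation is easy to check numerically for the standard normal distribution, whose CDF can be written via the error function. The following sketch uses only Python's standard library; the helper names `phi` and `Phi` are ours, not standard.

```python
import math

def phi(t):
    # standard normal density
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    # standard normal CDF, expressed via the error function math.erf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

a, delta = 1.0, 1e-4
exact = Phi(a + delta) - Phi(a)
approx = delta * phi(a)
print(exact, approx)  # the two agree to many decimal places
```

The discrepancy between the two numbers is of order $\delta^{2}$, consistent with the first-order approximation above.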
Among distributions with a pmf, we have seen the Binomial, Poisson, Geometric and Hypergeometric families of distributions. Now we give many important examples of distributions (CDFs) with densities.
We said that the normal CDF has no simple expression, but is it even clear that it is a CDF?! In other words, is the proposed density a true pdf? Clearly $\varphi(t)=\frac{1}{\sqrt{2\pi} }e^{-t^{2}/2}$ is non-negative. We need to check that its integral is $1$.
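Before the analytic computation, one can at least convince oneself numerically that the integral is $1$. Here is a minimal sketch using a trapezoidal sum on $[-10,10]$ (the tails beyond $\pm 10$ contribute a negligible amount); the setup is ours, for illustration only.

```python
import math

def phi(t):
    # the proposed normal density
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

# trapezoidal rule on [-10, 10]; the mass outside this interval is tiny
n = 200_000
a, b = -10.0, 10.0
h = (b - a) / n
total = h * (0.5 * (phi(a) + phi(b)) + sum(phi(a + i * h) for i in range(1, n)))
print(total)  # very close to 1
```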
When $\nu=1$, we get back the exponential distribution. Thus, the Gamma family subsumes the exponential distributions. For positive integer values of $\nu$, one can actually write an expression for the CDF of Gamma($\nu,\lambda$) as (this is a homework problem) $$ F_{\nu,\lambda}(t)=1-e^{-\lambda t}\sum\limits_{k=0}^{\nu-1}\frac{(\lambda t)^{k} }{k!}. $$ Once the expression is given, it is easy to check it by induction (and integration by parts). A curious observation is that the right hand side is exactly $\mathbf{P}(N\ge \nu)$ where $N\sim \mbox{Pois}(\lambda t)$. This in fact indicates a deep connection between the Poisson distribution and the Gamma distributions. The function $\Gamma(\nu)$, also known as Euler's Gamma function, is an interesting and important one and occurs all over mathematics.

The Gamma function: The function $\Gamma:(0,\infty)\rightarrow \mathbb{R}$ defined by $\Gamma(\nu)=\int_{0}^{\infty}e^{-t}t^{\nu-1}dt$ is a very important function that occurs often in mathematics and physics. There is no simpler expression for it, although one can evaluate it explicitly for special values of $\nu$. One of its most important properties is that $\Gamma(\nu+1)=\nu\Gamma(\nu)$. To see this, integrate by parts: $$ \Gamma(\nu+1)=\int_{0}^{\infty}e^{-t}t^{\nu}dt = -e^{-t}t^{\nu}\left.\vphantom{\hbox{\Large (}}\right|_{0}^{\infty}+\nu\int_{0}^{\infty}e^{-t}t^{\nu-1}dt = \nu \Gamma(\nu). $$ Starting with $\Gamma(1)=1$ (direct computation) and using the above relationship repeatedly one sees that $\Gamma(\nu)=(\nu-1)!$ for positive integer values of $\nu$. Thus, the Gamma function interpolates the factorial function (which is defined only for positive integers). Can we compute it for any other $\nu$? The answer is yes, but only for special values of $\nu$. For example, \[\begin{aligned} \Gamma(1/2)= \int_{0}^{\infty}x^{-1/2}e^{-x}dx = \sqrt{2}\int_{0}^{\infty}e^{-y^{2}/2}dy \end{aligned}\] by substituting $x=y^{2}/2$.
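The closed form for the Gamma($\nu,\lambda$) CDF can be tested numerically before attempting the induction: integrate the density $\frac{\lambda^{\nu}t^{\nu-1}e^{-\lambda t}}{\Gamma(\nu)}$ directly and compare. A sketch using only the standard library (the function names here are ours):

```python
import math

def gamma_cdf_formula(t, nu, lam):
    # the closed form for integer nu: 1 - e^{-lam t} * sum_{k<nu} (lam t)^k / k!
    s = sum((lam * t) ** k / math.factorial(k) for k in range(nu))
    return 1.0 - math.exp(-lam * t) * s

def gamma_density(u, nu, lam):
    # density of the Gamma(nu, lam) distribution
    return lam ** nu * u ** (nu - 1) * math.exp(-lam * u) / math.gamma(nu)

# compare the formula with a direct trapezoidal integral of the density
nu, lam, t = 4, 2.0, 3.0
n = 100_000
h = t / n
integral = h * (0.5 * (gamma_density(0.0, nu, lam) + gamma_density(t, nu, lam))
                + sum(gamma_density(i * h, nu, lam) for i in range(1, n)))
print(gamma_cdf_formula(t, nu, lam), integral)  # the two should agree closely
```

Note that `gamma_cdf_formula(t, nu, lam)` is, term by term, the Poisson tail probability $\mathbf{P}(N\ge\nu)$ with $N\sim\mbox{Pois}(\lambda t)$, which is exactly the curious observation above.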
The last integral was computed above in the context of the normal distribution and equals $\sqrt{\pi/2}$. Hence we get $\Gamma(1/2)=\sqrt{\pi}$. From this, using again the relation $\Gamma(\nu+1)=\nu\Gamma(\nu)$, we can compute $\Gamma(3/2)=\frac{1}{2}\sqrt{\pi}$, $\Gamma(5/2)=\frac{3}{4}\sqrt{\pi}$, etc. Yet another useful fact about the Gamma function is its asymptotics as $\nu\rightarrow\infty$.
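These special values, and the recursion itself, can be checked against Python's built-in implementation of Euler's Gamma function, `math.gamma`:

```python
import math

root_pi = math.sqrt(math.pi)
print(math.gamma(0.5), root_pi)            # Gamma(1/2) = sqrt(pi)
print(math.gamma(1.5), 0.5 * root_pi)      # Gamma(3/2) = (1/2) sqrt(pi)
print(math.gamma(2.5), 0.75 * root_pi)     # Gamma(5/2) = (3/4) sqrt(pi)

# the recursion Gamma(nu+1) = nu * Gamma(nu) also holds at non-integer points
nu = 3.7
print(math.gamma(nu + 1.0), nu * math.gamma(nu))
```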
Stirling's approximation: $\frac{\Gamma(\nu+1)}{\nu^{\nu+\frac{1}{2} }e^{-\nu}\sqrt{2\pi} }\rightarrow 1$ as $\nu\rightarrow \infty$.
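One can watch this ratio approach $1$ numerically; the helper name below is ours. Note that the convergence is from above and is fairly slow (the error is of order $1/\nu$).

```python
import math

def stirling_ratio(nu):
    # Gamma(nu + 1) / (nu^{nu + 1/2} * e^{-nu} * sqrt(2 pi))
    return math.gamma(nu + 1) / (nu ** (nu + 0.5) * math.exp(-nu)
                                 * math.sqrt(2 * math.pi))

for nu in (1.0, 10.0, 100.0):
    print(nu, stirling_ratio(nu))  # ratios decrease toward 1
```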
A small digression: It was Euler's idea to observe that $n!=\int_{0}^{\infty}x^{n}e^{-x}dx$ and that on the right side $n$ could be replaced by any real number greater than $-1$. But this was his second approach to defining the Gamma function. His first approach was as follows. Fix a positive integer $n$. Then for any $\ell\ge 1$ (also a positive integer), we may write \[\begin{aligned} n!=\frac{(n+\ell)!}{(n+1)(n+2)\ldots (n+\ell)} = \frac{\ell!(\ell+1)\ldots (\ell+n)}{(n+1)\ldots (n+\ell)} = \frac{\ell! \ell^{n} }{(n+1)\ldots (n+\ell)}\cdot\frac{(\ell+1)\ldots (\ell+n)}{\ell^{n} } \end{aligned}\] The second factor approaches $1$ as $\ell\rightarrow \infty$. Hence, \[\begin{aligned} n!=\lim_{\ell\rightarrow \infty}\frac{\ell! \ell^{n} }{(n+1)\ldots (n+\ell)}. \end{aligned}\] Euler then showed (by a rather simple argument that we skip) that the limit on the right exists if we replace $n$ by any complex number other than $\{-1,-2,-3,\ldots \}$ (negative integers are a problem as they make the denominator zero). Thus, he extended the factorial function to all complex numbers except the negative integers! It is a fun exercise to check that this agrees with the definition by the integral given earlier. In other words, for $\nu > -1$, we have \[\begin{aligned} \lim_{\ell\rightarrow \infty}\frac{\ell! \ell^{\nu} }{(\nu+1)\ldots (\nu+\ell)}=\int_{0}^{\infty}x^{\nu}e^{-x}dx. \end{aligned}\]
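The agreement of the two definitions can be observed numerically. Since $\ell!$ overflows floating point for large $\ell$, the sketch below works in log space via `math.lgamma` (the function name `euler_product` is ours); the integral side is supplied by `math.gamma`, since $\int_{0}^{\infty}x^{\nu}e^{-x}dx=\Gamma(\nu+1)$.

```python
import math

def euler_product(nu, ell):
    # ell! * ell^nu / ((nu+1)(nu+2)...(nu+ell)), computed in log space
    log_val = math.lgamma(ell + 1) + nu * math.log(ell)
    log_val -= sum(math.log(nu + k) for k in range(1, ell + 1))
    return math.exp(log_val)

nu = 0.5
for ell in (100, 10_000, 100_000):
    print(ell, euler_product(nu, ell))
print(math.gamma(nu + 1))  # the limiting value Gamma(nu + 1)
```

The error decays only like $1/\ell$, which matches the slow convergence visible in the printed values.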
Now for any other positive integer value of $\alpha$ and real $\beta > 0$ we can integrate by parts and get $$\begin{align*} B(\alpha,\beta)&=\int_{0}^{1}t^{\alpha-1}(1-t)^{\beta-1}dt \\ &= -\frac{1}{\beta}t^{\alpha-1}(1-t)^{\beta}\left.\vphantom{\hbox{\Large (}}\right|_{0}^{1} + \frac{\alpha-1}{\beta}\int_{0}^{1}t^{\alpha-2}(1-t)^{\beta}dt \\ &= \frac{\alpha-1}{\beta}B(\alpha-1,\beta+1). \end{align*}$$ Note that the first term vanishes because $\alpha > 1$ and $\beta > 0$. When $\alpha$ is an integer, we repeat this $\alpha-1$ times and get $$ B(\alpha,\beta)=\frac{(\alpha-1)(\alpha-2)\ldots 1}{\beta(\beta+1)\ldots (\beta+\alpha-2)}B(1,\beta+\alpha-1). $$ But we already checked that $B(1,\beta+\alpha-1)=\frac{\Gamma(1)\Gamma(\alpha+\beta-1)}{\Gamma(\alpha+\beta)}$ from which we get $$ B(\alpha,\beta) = \frac{(\alpha-1)(\alpha-2)\ldots 1}{\beta(\beta+1)\ldots (\beta+\alpha-2)}\frac{\Gamma(1)\Gamma(\alpha+\beta-1)}{\Gamma(\alpha+\beta)} =\frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)} $$ by the recursion property of the Gamma function. Thus we have proved the proposition when $\alpha$ is a positive integer. By symmetry the same is true when $\beta$ is a positive integer (and $\alpha > 0$ is arbitrary). We do not bother to prove the proposition for general $\alpha,\beta > 0$ here.
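Although we do not prove the identity for general $\alpha,\beta>0$, it is easy to test numerically. The sketch below compares a trapezoidal integral of $t^{\alpha-1}(1-t)^{\beta-1}$ with the Gamma-function expression, in the case proved in the text ($\alpha$ an integer, $\beta$ not); the function names are ours.

```python
import math

def beta_numeric(alpha, beta, n=100_000):
    # trapezoidal integral of t^{alpha-1} (1-t)^{beta-1} over [0, 1];
    # for alpha, beta > 1 the integrand vanishes at both endpoints
    f = lambda t: t ** (alpha - 1) * (1.0 - t) ** (beta - 1)
    h = 1.0 / n
    return h * (0.5 * (f(0.0) + f(1.0)) + sum(f(i * h) for i in range(1, n)))

def beta_gamma(alpha, beta):
    # the claimed identity: B(alpha, beta) = Gamma(alpha) Gamma(beta) / Gamma(alpha + beta)
    return math.gamma(alpha) * math.gamma(beta) / math.gamma(alpha + beta)

# alpha a positive integer, beta an arbitrary positive real
print(beta_numeric(3.0, 2.5), beta_gamma(3.0, 2.5))
```

Changing `3.0, 2.5` to non-integer pairs such as `1.3, 2.7` gives the same agreement, which is some evidence for the general statement left unproved here.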