In this talk, we discuss two problems: the sampling problem for a given class of functions on $\mathbb{R}$ (the direct problem) and the reconstruction of a function from finite samples (the inverse problem). In the sampling problem, we are given a class of functions $V\subset L^2(\mathbb{R})$ and seek sets of discrete samples such that any $f\in V$ can be completely recovered from its values at the sample points. Here we address the sampling problem for the class $V = V(\varphi)$, the (integer) shift-invariant space defined by a generator $\varphi$ satisfying some general assumptions.

The second problem concerns the reconstruction of a real-valued function $f$ on $X\subset \mathbb{R}^d$ from given data $\{(x_i,y_i)\}_{i=1}^n\subset X\times\mathbb{R}$, where it is assumed that $y_i=f(x_i)+\xi_i$ and $(\xi_1,\dots,\xi_n)$ is a noise vector. In particular, we are interested in reconstructing the function at points outside the closed convex hull of $\{x_1,\dots,x_n\}$, the so-called extrapolation problem. We consider this problem in the framework of statistical learning theory and regularization networks, and we address a major issue in that framework: how to choose an appropriate hypothesis space and regularized predictor for the given data, which we do through a meta-learning approach. We apply the proposed method to blood glucose prediction in diabetes patients and, using real clinical data, demonstrate that it outperforms state-of-the-art time-series and neural-network-based models.
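As a rough illustration of the regularization-network setting (not the speaker's actual method), fitting noisy samples $y_i = f(x_i) + \xi_i$ with a Gaussian-kernel hypothesis space reduces, via the representer theorem, to kernel ridge regression: the predictor is $f(x) = \sum_i c_i K(x, x_i)$ with coefficients solving $(K + n\lambda I)c = y$. A minimal sketch, with the kernel width `sigma` and regularization parameter `lam` chosen arbitrarily for the example:

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_regularization_network(X, y, lam=1e-2, sigma=1.0):
    # Minimize (1/n) sum_i (f(x_i) - y_i)^2 + lam ||f||_K^2 over the RKHS.
    # By the representer theorem the minimizer is f(x) = sum_i c_i K(x, x_i),
    # where the coefficient vector c solves (K + n*lam*I) c = y.
    n = len(y)
    K = gaussian_kernel(X, X, sigma)
    c = np.linalg.solve(K + n * lam * np.eye(n), y)
    return lambda Xq: gaussian_kernel(Xq, X, sigma) @ c

# Noisy samples of f(x) = sin(x) on [0, 2*pi] (synthetic toy data).
rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
f = fit_regularization_network(X, y, lam=1e-3, sigma=0.8)
err = abs(f(np.array([[np.pi / 2]]))[0] - 1.0)  # error at x = pi/2, where sin = 1
```

The meta-learning question the talk addresses is precisely how to pick quantities like `sigma` and `lam` (and the kernel itself) from the data, which is especially delicate when predicting outside the convex hull of the samples.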


Last updated: 06 Mar 2020