We sketch a strategy to prove the Tate conjecture on algebraic cycles for a large class of quaternionic Shimura varieties. A key point is a twisted adjoint L-value formula relative to each quaternion algebra $D/F$ for a totally real field $F$ and its scalar extension $B=D\otimes_F E$ for a totally real quadratic extension $E_{/F}$. The theta base-change lift $\mathcal{F}$ of a Hilbert modular form $f$ to $B^\times$ has period integral over the Shimura subvariety $Sh_D\subset Sh_B$ given by the L-value $L(1,Ad(f)\otimes(\frac{E/F}{}))$, which is non-zero; so, $Sh_D$ gives rise to a non-trivial Tate cycle in $H^{2r}(Sh_B,\mathbb{Q}_l(r))$ for $r=\dim Sh_D=\dim Sh_B/2$.
First passage percolation (FPP) gives a well-known model of random geometry on a fixed background infinite graph. When we specialize to Cayley graphs of Gromov-hyperbolic groups $G$, random trees $T$ emerge naturally. The first part of the talk will dwell on setting up hyperbolic FPP and outlining its basic properties. This will have a probabilistic emphasis.
In the second part, we will specialize to the study of exceptional directions, i.e. distinct random geodesics in $T$ that converge asymptotically to the same point in the boundary $\partial G$ of $G$. This will have a geometric group theoretic emphasis (joint work with Riddhipratim Basu).
If time permits, we will describe how to reconstruct the bulk random hyperbolic geometry from the boundary. Random trees leave a trace on the boundary $\partial G$ of $G$ in the form of an evolving random partition. It turns out that there is an inverse construction, where the random metric can be reconstructed (up to bounded errors) from the data of an evolving random partition on the boundary.
Random processes with strong memory and/or self-excitation properties arise naturally in various disciplines including physics, economics, biology, engineering, geology, etc. Many of these processes exhibit superdiffusive growth due to the effect of self-excitation. In order to model such situations, the statistical physicists Schütz and Trimper (2004) introduced a class of processes called Elephant Random Walks, in which the random walker (an elephant 🐘 with strong memory!) remembers the past steps and repeats them with some probability. In the first half of this talk, we shall discuss the basics of random walks and the elephant random walk in lucid language. The second half will focus on a couple of models investigated by the speaker and his collaborators, leading to phase transition results that verify several conjectures of Saha (2022).
Special care will be taken so that students can follow both halves of this talk. Most of the notions will be introduced, and nothing more than basic probability will be a prerequisite for this talk. (Based on collaborations with A. Haldar, K. Maulik, S. S. Manna and T. Sadhukhan.)
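As a toy illustration (my own sketch, not part of the talk), here is a minimal simulation of the Schütz–Trimper elephant random walk; the memory parameter `p` below is the probability of repeating a uniformly chosen past step, and reversing it otherwise.

```python
import random

def elephant_walk(n_steps, p, first_step_prob=0.5):
    """Simulate an elephant random walk of length n_steps.

    p: probability of repeating a (uniformly chosen) past step;
       with probability 1 - p the chosen step is reversed.
    first_step_prob: probability that the first step is +1.
    """
    steps = [1 if random.random() < first_step_prob else -1]
    for _ in range(n_steps - 1):
        remembered = random.choice(steps)          # recall a past step uniformly
        step = remembered if random.random() < p else -remembered
        steps.append(step)
    positions, pos = [], 0
    for s in steps:
        pos += s
        positions.append(pos)
    return positions

if __name__ == "__main__":
    # p > 3/4 is the superdiffusive regime for the classical elephant random walk
    print(elephant_walk(1000, p=0.9)[-1])
```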
Elementary root separation arguments imply that a non-negative polynomial on the real line is a sum of two squares of polynomials. A non-constructive and ingenious enumerative geometry observation of Hilbert shows that in two or more variables, non-negative polynomials are not always sums of squares. This led to Hilbert’s 17-th problem, asking whether such a decomposition is possible in the field of rational functions. The answer is yes, due to Emil Artin. The gap between polynomials which are positive on semi-algebraic sets and corresponding weighted sums of squares was elucidated by Tarski’s elimination of quantifiers principle. The first part of the lecture will contain accessible details and historical notes on these topics, now part of Real Algebra and Real Algebraic Geometry.
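A standard explicit example illustrating Hilbert's observation (found much later, by Motzkin, since Hilbert's argument was non-constructive) is
\begin{equation} M(x,y) = x^4 y^2 + x^2 y^4 - 3x^2 y^2 + 1, \end{equation}
which is non-negative on $\mathbb{R}^2$ by the arithmetic–geometric mean inequality, yet is not a sum of squares of polynomials; in accordance with Artin's theorem, it is a sum of squares of rational functions.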
In the second part of the lecture, I will show how sums of squares decompositions led F. Riesz to the definitive form of the spectral theorem for self-adjoint transformations of a Hilbert space, implying, for instance, novel positivity results in harmonic analysis. Reversing the historical arrow, I will show how operator theory has left some essential marks on purely algebraic chapters of Real Algebra. We will also touch on positivity in non-commutative *-algebras and Lie algebras.
The third part of the lecture will contain applications of relatively recent positivity certificates to global, non-convex optimization, stability of dynamical systems and construction of wavelet frames.
The video of this talk is available on the IISc Math Department channel.
“Explicit” number theory is the name given to the study of what are called zero-free regions of the Riemann zeta function and other L-functions. An explicit determination of such regions often reveals deep arithmetic properties of the underlying object attached to the concerned L-function. More generally, it could refer to the use of “explicit”, often technical methods to understand an arithmetic object. On the other hand, “probabilistic” number theory attempts to investigate arithmetic properties of an object by treating the object as one in a family of many, and exploring these families of varying sizes through the viewpoint of probability. In this talk, we will explore both perspectives and compare the wealth of information each perspective presents to us. In particular, we will address the study of Fourier coefficients of certain modular forms (called the Hecke eigenforms) through both the above viewpoints.
The problem of maximum packing density of the $n$-dimensional real space with spheres is a classic one. Exact values of the density are known only in a few cases ($n=1,2,3,8,24$), and there have been several recent improvements of the bounds for other small dimensions. In a parallel development, researchers have studied the maximum size of packings of the $n$-dimensional Hamming space, known as error-correcting codes. While existence bounds in both cases are found by random choice, the best known impossibility results are obtained by an application of a general method commonly known as Delsarte’s linear programming. The best known upper bound on the maximum size of a code with a given minimum distance for large $n$ was obtained in 1977, and it has proved surprisingly resistant to various improvement attempts, including the semidefinite programming extension of LP.
In the first part of the talk we will introduce the general problem and give an overview of the known results on upper bounds on codes and related problems such as equiangular lines and families of finite sets with restricted intersections. In the second part, we will delve into the details of the proofs for the case of codes and highlight some obstacles for further improvements.
More than 50 years ago, Fuchs asked which abelian groups can be the group of units of a commutative ring. Though progress has been made, the question remains open. After introducing the problem and known results in the first part of the talk, I will present an overview of my joint work with Keir Lockridge on this problem. We answered this question for various classes of groups including indecomposable abelian groups, dihedral groups, quaternion groups, and some $p$-groups. This work also gave us several new characterizations of Mersenne primes and Fermat primes.
The Brownian motion is the scaling limit of random walks where the step distribution has finite second moment. Various random objects constructed from the Brownian motion, e.g., the Brownian continuum random tree and the Brownian map, arise naturally in the study of random trees, graphs, and maps. In the first talk, we will give a gentle introduction to these objects. In the second talk, we will discuss some recent advances in establishing certain Brownian objects as the scaling limits of different models of random discrete structures.
Define $g(k) = \min \{ s :$ every positive integer can be written as a sum of $k$th powers of natural numbers with at most $s$ summands$\}$. Lagrange proved that $g(2) = 4$. Waring conjectured that $g(3) = 9, g(4) = 19$ and so on.
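As a small computational aside (my sketch, not part of the talk): the number 23 already needs nine cubes and 79 needs nineteen fourth powers, which is why $g(3)\ge 9$ and $g(4)\ge 19$. A brute-force check of the minimal number of $k$th-power summands might look like this.

```python
def min_kth_power_summands(n, k):
    """Minimal number of k-th powers of positive integers summing to n (simple DP)."""
    powers = []
    i = 1
    while i ** k <= n:
        powers.append(i ** k)
        i += 1
    INF = float("inf")
    best = [0] + [INF] * n
    for m in range(1, n + 1):
        for p in powers:
            if p > m:
                break
            best[m] = min(best[m], best[m - p] + 1)
    return best[n]

# 23 = 2*2^3 + 7*1^3 needs 9 cubes; 79 = 4*2^4 + 15*1^4 needs 19 fourth powers
print(min_kth_power_summands(23, 3))   # -> 9
print(min_kth_power_summands(79, 4))   # -> 19
```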
In fact, in this question, there has been a lot of contribution from Indian mathematicians. The method of attacking this problem is called the circle method and it originates from a seminal paper of Hardy and Ramanujan. The final result owes a lot to the contributions of S.S. Pillai. The analogous question over number fields was settled by C.P. Ramanujam. We shall explain their contributions toward this problem.
For any $1 < q <\infty,$ standard representation formulas and the Calderon–Zygmund estimates imply $u \in W^{2,q}_{\text{loc}}\left(\mathbb{R}^{n}\right)$ if $\Delta u \in L^{q}_{\text{loc}}(\mathbb{R}^{n}).$ Combined with the Sobolev–Morrey embeddings for $q>n,$ we deduce that $\nabla u$ is locally Hölder continuous. However, as soon as we pass from the linear case to the quasilinear operator, we no longer have any representation formula for the solution of the following problem \begin{equation} {-}{\rm div}\left(\left\lvert \nabla u \right\rvert^{p-2}\nabla u\right) = f \end{equation} if $p \neq 2$ and CZ estimates for second derivatives of the solution are not yet known. In fact, the solution can fail to be $C^{2}$ even when $f \equiv 0.$
However, one can still establish Hölder continuity of the gradient whenever
${\rm div}\left(\left\lvert \nabla u \right\rvert^{p-2}\nabla u\right) \in L^{q}_{\text{loc}}$ and $q>n.$ Results of this type are often called
“Nonlinear Calderon–Zygmund theory”, as the regularity for the gradient is the same, i.e. “as if” Calderon–Zygmund estimates for second derivatives are
valid! This result relies heavily on a fundamental regularity result, commonly known as the DeGiorgi–Nash–Moser estimate, for $p$-harmonic functions.
However, such regularity results are specific to equations and are in general false for elliptic systems. In another groundbreaking work, Uhlenbeck extended
gradient Hölder continuity estimates to solutions of a special class of systems, which includes the homogeneous $p$-Laplacian systems.
In this lecture, I will sketch the main ideas involved in establishing nonlinear Calderon–Zygmund theory for scalar equations and elliptic systems with Uhlenbeck
structures. In the second half, I will discuss how to extend these estimates to the following $p$-Laplacian type system for vector-valued differential forms
\begin{equation}
d^{\ast}\left(\left\lvert d u \right\rvert^{p-2}d u\right) = f.
\end{equation}
This includes systems which are, strictly speaking, even non-elliptic.
The Macdonald polynomials are a homogeneous basis for the algebra of symmetric polynomials, which generalize many important families of special functions, such as Schur polynomials, Hall-Littlewood polynomials, and Jack polynomials.
The interpolation polynomials, introduced by F. Knop and the speaker, are an inhomogeneous extension of Macdonald polynomials, which are characterized by very simple vanishing properties.
The binomial coefficients are special values of interpolation polynomials, which play a central role in the higher rank $q$-binomial theorem of A. Okounkov.
We will give an elementary self-contained introduction to all three objects, and discuss some recent results, open problems, and applications.
Associated to two given sequences of eigenvalues is a natural polytope, the polytope of augmented hives with the specified boundary data, which arises in the study of sums of random Hermitian matrices with these eigenvalues. As a first step towards the asymptotic analysis of random hives, we show that if the eigenvalues are drawn from the GUE ensemble, then the associated augmented hives exhibit concentration as the number of eigenvalues tends to infinity.
Our main ingredients include a representation due to Speyer of augmented hives involving a supremum of linear functions applied to a product of Gelfand–Tsetlin polytopes; known results by Klartag on the KLS conjecture in order to handle the aforementioned supremum; covariance bounds of Cipolloni–Erdös–Schröder of eigenvalue gaps of GUE; and the use of the theory of determinantal processes to analyze the GUE minor process. This is joint work with Scott Sheffield and Terence Tao.
In the first part of the talk, we will discuss the main statement of local class field theory that describes the abelian extensions of a non-archimedean local field $F$ in terms of the arithmetic of the field $F$. Then we will discuss the statement of the local Langlands conjectures, a vast generalization of local class field theory, that gives a (conjectural) parametrization of the irreducible complex representations of $G(F)$, where $G$ is a connected, reductive group over $F$, in terms of certain Galois representations. We will then discuss a philosophy of Deligne and Kazhdan that loosely says that to obtain such a parametrization for representations of $G(F')$, with $F'$ of characteristic $p$, it suffices to obtain such a parametrization for representations of $G(F)$ for all local fields $F$ of characteristic $0$. In the second half of the talk, we will mention some instances where the Deligne-Kazhdan philosophy has been applied successfully to obtain a Langlands parametrization of irreducible representations of $G(F')$ in characteristic $p$ and focus on some recent work on variants/generalizations of the work of Kazhdan.
Non-normal matrices are ubiquitous in various branches of science, such as fluid dynamics, mathematical physics, partial differential equations, and many more. Non-normality causes notorious sensitivity of the eigenvalues, and the eigenvalue analysis often turns out to be misleading. These motivate the study of pseudospectrum, and the spectral properties of random perturbation of non-normal matrices. In the first part of the talk, we will introduce these issues and their resolutions through some fun experiments and simulations. In the latter half, we will move to describe spectral properties of random perturbations of non-normal Toeplitz matrices, where over the last few years a coherent theory has emerged.
Of fundamental importance in number theory is the question of non-vanishing of central L-values of L-functions. One approach, explained in the talk, is to make use of the Relative trace formula (which will be introduced from scratch); a basic example of interest involves twists of L-functions of classical modular forms. If time permits, we will explain the recent work with Michel and Yang on $U(2)$-twists of $U(3)$ L-functions.
The most fundamental objects in number theory are number fields, field extensions of the rational numbers that are finite dimensional as vector spaces over $\mathbb{Q}$. Their arithmetic is governed heavily by certain invariants such as the discriminant, Artin conductors, and the class group; for example, the ring of integers inside a number field has unique prime factorization if and only if its class group is trivial. The behavior of these invariants is truly mysterious: it is not known how many number fields there are having a given discriminant or conductor, and it is an open conjecture dating back to Gauss as to how many quadratic fields have trivial class group.
Nonetheless, one may hope for statistical information regarding these invariants of number fields, the most basic such question being “How are such invariants distributed amongst number fields of degree $d$?” To obtain more refined asymptotics, one may fix the Galois structure of the number fields in question. There are many foundational conjectures that predict the statistical behavior of these invariants in such families; however, only a handful of unconditional results are known. In this talk, I will describe a combination of algebraic, analytic, and geometric methods to prove many new instances of these conjectures, including some joint results with Altug, Bhargava, Ho, Shankar, and Wilson.
We classify similarity classes of tetrahedra whose dihedral angles are all rational multiples of $\pi$ (when measured in radians), answering a question of Conway-Jones from 1976. In the process, we also classify collections of vectors in $\mathbb{R}^3$ whose pairwise angles are rational. The proof uses a mixture of theoretical arguments, exact computations in computer algebra, and floating-point numerical computations. (Joint with Alexander Kolpakov, Bjorn Poonen, and Michael Rubinstein.)
For decades, mathematicians have been using computers to calculate. More recently there has been some interest in trying to get them to reason. What is the difference? An example of a calculation: compute the first one million prime numbers. An example of reasoning: prove that there are infinitely many prime numbers. Tools like ChatGPT can prove things like this, because they have seen many proofs of it on the internet. But can computers help researchers to come up with new mathematics? Hoping that a computer will automatically prove the Riemann Hypothesis is still science fiction. But new tools and methods are becoming available. I will give an overview of the state of the art.
(This is a Plenary talk in the EECS Research Students’ Symposium)
From the longest increasing subsequence in a random permutation to the shortest distance in a randomly weighted two dimensional Euclidean lattice, a large class of planar random growth models are believed to exhibit shared large scale features of the so-called Kardar-Parisi-Zhang (KPZ) universality class. Over the last 25 years, intense mathematical activity has led to a lot of progress in the understanding of these models, and connections to several other topics such as algebraic combinatorics, random matrices and partial differential equations have been unearthed. I shall try to give an elementary introduction to this area, describe some of what is known as well as many questions that remain open.
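As a small illustration (my own sketch, not from the talk) of the first example above: the length of the longest increasing subsequence of a uniformly random permutation of $\{1,\dots,n\}$ concentrates around $2\sqrt{n}$, with fluctuations of order $n^{1/6}$.

```python
import bisect
import random
import statistics

def lis_length(perm):
    """Length of the longest increasing subsequence (patience sorting, O(n log n))."""
    piles = []
    for x in perm:
        i = bisect.bisect_left(piles, x)
        if i == len(piles):
            piles.append(x)
        else:
            piles[i] = x
    return len(piles)

n, trials = 10_000, 50
samples = []
for _ in range(trials):
    perm = list(range(n))
    random.shuffle(perm)
    samples.append(lis_length(perm))

print("mean LIS length:", statistics.mean(samples), " 2*sqrt(n):", 2 * n ** 0.5)
```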
Statistical decision theory led me to game theory and certain war duel models, and the close connection between the Perron–Frobenius theorem and game theory led me to the works of M.G. Krein on special classes of cones and spectral properties of positive operators. The influence of Professors V.S. Varadarajan, K.R. Parthasarathy and S.R.S. Varadhan in the early 60's at ISI was profound for many of us who were young graduate students in the 1962-66 period. The talk will highlight, besides the theorems, the teacher-student interactions of those days.
In the first half of the talk I will recall two classical theorems - Dirichlet’s class number formula and Stickelberger’s theorem. Stark and Brumer gave conjectural generalisations of these statements. We will see formulations of some of these conjectures. In the second half of the talk we will restrict to a special case of the Brumer-Stark conjecture. Here p-adic techniques can be used to resolve the conjecture. We will see a sketch of this proof. This is joint work with Samit Dasgupta.
It is commonly expected that $e$, $\log 2$, $\sqrt{2}$, $\pi$, among other “classical” numbers, behave, in many respects, like almost all real numbers. For instance, they are expected to be normal to base 10, that is, one believes that their decimal expansion contains every finite block of digits from $\{0, \ldots , 9\}$. We are very far away from establishing such a strong assertion. However, there has been some small recent progress in that direction. After surveying classical results and problems on normal numbers, we will adopt a point of view from combinatorics on words and show that the decimal expansions of $e$, of any irrational algebraic number, and of $\log (1 + \frac{1}{a})$, for a sufficiently large integer $a$, cannot be ‘too simple’, in a suitable sense.
Solitons are solutions of a special class of nonlinear partial differential equations (soliton equations, the best example being the KdV equation). They are waves but behave like particles. The term “soliton”, combining the beginning of the word “solitary” with the ending “-on”, suggests a fundamental particle, like “proton” or “electron”.
The events: (1) sighting, by chance, of a great wave of translation, “solitary wave”, in 1834 by Scott Russell, (2) derivation of the KdV equation by Korteweg and de Vries in 1895, (3) observation of a very special type of wave interactions in numerical experiments by Kruskal and Zabusky in 1965, (4) development of the inverse scattering method for solving initial value problems by Gardner, Greene, Kruskal and Miura in 1967, (5) formulation of a general theory in 1968 by P.D. Lax and (6) contributions to deep theories starting from the work by R. Hirota (1971-74) and David Mumford (1978-79), which also gave simple methods of solutions of soliton equations, led to the development of one of the most important areas of mathematics in the 20th century.
This also led to valuable applications of solitons in physics, engineering and technology. There are two aspects of soliton theory arising out of the KdV equation.
The subject is too big, but I shall try to give some glimpses (1) of the history, (2) of the inverse scattering method, and (3) of an algorithm based on the algebraic-geometric approach that makes it much easier to derive soliton solutions.
The Riemann hypothesis provides insights into the distribution of prime numbers, stating that the nontrivial zeros of the Riemann zeta function have a “real part” of one-half. A proof of the hypothesis would be world news and fetch a $1 million Millennium Prize. In this lecture, Ken Ono will discuss the mathematical meaning of the Riemann hypothesis and why it matters. Along the way, he will tell tales of mysteries about prime numbers and highlight new advances. He will conclude with a discussion of recent joint work with mathematicians Michael Griffin of Brigham Young University, Larry Rolen of Georgia Tech, and Don Zagier of the Max Planck Institute, which sheds new light on this famous problem.
The sphere packing problem asks for the densest packing by congruent non-overlapping spheres in n dimensions. It is a famously hard problem, though easy to state, and with many connections to different parts of mathematics and physics. In particular, every dimension seems to have its own idiosyncrasies, and until recently no proven answers were known beyond dimension 3, with the 3-dimensional solution being a tour de force of computer-aided mathematics.
Then in 2016, a breakthrough was achieved by Viazovska, solving the sphere packing problem in 8 dimensions. This was followed shortly by joint work of Cohn-Kumar-Miller-Radchenko-Viazovska solving the sphere packing problem in 24 dimensions. The solutions involve linear programming bounds and modular forms. I will attempt to describe the main ideas of the proof.
Modular forms are certain functions defined on the upper half plane that transform nicely under $z\to -1/z$ as well as $z\to z+1$. By a modular relation (or a modular-type transformation) for a certain function $F$, we mean that which is governed by only the first map, i.e., $z\to -1/z$. Equivalently, the relation can be written in the form $F(\alpha)=F(\beta)$, where $\alpha \beta = 1$. There are many generalized modular relations in the literature such as the general theta transformation $F(w,\alpha) = F(iw, \beta)$ or other relations of the form $F(z, \alpha) = F(z, \beta)$ etc. The famous Ramanujan-Guinand formula, equivalent to the functional equation of the non-holomorphic Eisenstein series on ${\rm SL}_{2}(\mathbb{Z})$, admits a beautiful generalization of the form $F(z, w,\alpha) = F(z, iw, \beta)$ recently obtained by Kesarwani, Moll and the speaker. This implies that one can superimpose the theta structure on the Ramanujan-Guinand formula.
The current work arose from answering a similar question - can we superimpose the theta structure on a recent modular relation involving infinite series of the Hurwitz zeta function $\zeta(z, a)$ which generalizes an important result of Ramanujan? In the course of answering this question in the affirmative, we were led to a surprising new generalization of $\zeta(z, a)$. This new zeta function, $\zeta_w(z, a)$, satisfies interesting properties, albeit they are much more difficult to derive than the corresponding ones for $\zeta(z, a)$. In this talk, I will briefly discuss the theory of the Riemann zeta function $\zeta(z)$, the Hurwitz zeta function $\zeta(z, a)$ and then describe the theory of $\zeta_w(z, a)$ that we have developed. In order to obtain the generalized modular relation (with the theta structure) satisfied by $\zeta_w(z, a)$, one not only needs this theory but also has to develop the theory of reciprocal functions in a certain kernel involving Bessel functions as well as the theory of a generalized modified Bessel function. (Based on joint work with Rahul Kumar.)
Consider a stochastic matrix $P$ for which the Perron–Frobenius eigenvalue has multiplicity larger than 1, and for $\epsilon > 0$, let
\begin{equation} P^\epsilon := (1 - \epsilon) P + \epsilon Q, \end{equation}
where $Q$ is a stochastic matrix for which the Perron–Frobenius eigenvalue has multiplicity 1. Let $\pi^\epsilon$ be the Perron–Frobenius eigenfunction for $P^\epsilon$. We will discuss the behavior of $\pi^\epsilon$ as $\epsilon \to 0$.
This was an important ingredient in showing that if two players repeatedly play Prisoner’s Dilemma, without knowing that they are playing a game, and if they play rationally, they end up cooperating. We will discuss this as well in the second half.
The talk will include the required background on Markov chains.
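As a toy numerical illustration of the setup above (my own sketch, with made-up matrices): the matrix $P$ below has two closed classes, so its Perron–Frobenius eigenvalue has multiplicity two, and one can watch the stationary distribution of $P^\epsilon = (1-\epsilon)P + \epsilon Q$ select a particular mixture as $\epsilon \to 0$.

```python
import numpy as np

# P has two closed classes {0,1} and {2,3}: the eigenvalue 1 has multiplicity 2.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.5, 0.5]])
# Q is an irreducible stochastic matrix: its eigenvalue 1 is simple.
Q = np.array([[0.10, 0.20, 0.30, 0.40],
              [0.25, 0.25, 0.25, 0.25],
              [0.40, 0.30, 0.20, 0.10],
              [0.20, 0.30, 0.30, 0.20]])

def stationary(M):
    """Left Perron-Frobenius eigenvector of a stochastic matrix, normalised to sum 1."""
    vals, vecs = np.linalg.eig(M.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

for eps in [0.5, 0.1, 0.01, 0.001]:
    P_eps = (1 - eps) * P + eps * Q
    print(eps, stationary(P_eps).round(4))
# As eps -> 0 the stationary vectors approach a particular convex combination of the
# uniform distributions on the two closed classes of P, with weights selected by Q.
```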
Abstract:
The mathematics of Monge–Kantorovich optimal transport (OT) has grown to be a unifying theme in many scientific disciplines, from purely mathematical areas such as analysis, geometry, and probability to revolutionary new methods in economics, statistics, machine learning and artificial intelligence. Much of this is due to impressive leaps in computational methods in OT that hinge on entropy based approximations. For dynamical OT such relaxations correspond to stochastic processes called Schrödinger bridges. This is an introductory talk on OT, entropic relaxations, and Schrödinger bridges that will attempt to give a flavor of this "hot area" by sewing through various recent advances.
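To give a concrete feel for the entropic approximations mentioned above, here is a minimal sketch (my own, with made-up data) of the Sinkhorn iteration for discrete entropy-regularised optimal transport; the parameter `reg` plays the role of the temperature in the Schrödinger-bridge picture.

```python
import numpy as np

def sinkhorn(mu, nu, C, reg, n_iter=500):
    """Entropy-regularised OT between discrete measures mu, nu with cost matrix C.

    After n_iter iterations, returns an (approximate) minimiser of
    <P, C> + reg * KL(P | mu x nu) over plans P with marginals mu and nu.
    """
    K = np.exp(-C / reg)              # Gibbs kernel
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(n_iter):
        u = mu / (K @ v)              # enforce the first marginal
        v = nu / (K.T @ u)            # enforce the second marginal
    return np.diag(u) @ K @ np.diag(v)

# Toy example: transport between two measures on 5 points of the line.
x = np.linspace(0.0, 1.0, 5)
C = (x[:, None] - x[None, :]) ** 2    # quadratic cost
mu = np.full(5, 0.2)
nu = np.array([0.1, 0.1, 0.2, 0.3, 0.3])
P = sinkhorn(mu, nu, C, reg=0.01)
print(P.round(3))
print(P.sum(axis=1), P.sum(axis=0))   # marginals are (approximately) mu and nu
```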
Abstract:
Consider a Monge–Kantorovich problem of transporting densities with a strictly convex cost function. The entropic relaxation cost is known to converge to the cost of optimal transport. We are interested in the difference between the two, suitably scaled. We show that this limit is always given by the relative entropy of the target density with respect to a Riemannian volume measure that measures the local sensitivity of the Monge map. In the special case of the quadratic Wasserstein transport, this relative entropy is exactly one half of the difference of entropies of the target and initial densities. The proofs are based on Gaussian approximations to Schrödinger bridges, which can be interpreted as a higher-order large deviation.
A diagonalizable matrix has linearly independent eigenvectors. Since the set of nondiagonalizable matrices has measure zero, every matrix is the limit of diagonalizable matrices. We prove a quantitative version of this fact: every $n \times n$ complex matrix is within distance $\delta$ of a matrix whose eigenvectors have condition number $\mathrm{poly}(n)/\delta$, confirming a conjecture of E. B. Davies. The proof is based on regularizing the pseudospectrum with a complex Gaussian perturbation.
Joint work with J. Banks, A. Kulkarni, S. Mukherjee.
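A quick numerical way to see the phenomenon (my own sketch, not the method of the paper): a Jordan block is badly non-diagonalizable, but a tiny complex Gaussian perturbation already has reasonably well-conditioned eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(0)
n, delta = 50, 1e-6

# A nilpotent Jordan block: a single eigenvalue 0 with only one eigenvector.
A = np.diag(np.ones(n - 1), k=1)

# Complex Gaussian perturbation scaled to Frobenius norm delta.
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
E = delta * G / np.linalg.norm(G)

_, V = np.linalg.eig(A + E)
print("eigenvector condition number:", np.linalg.cond(V))
# For the unperturbed Jordan block this condition number is infinite; the theorem
# above says a perturbation of size delta can always bring it down to poly(n)/delta.
```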
The Pick–Nevanlinna problem refers to the problem of – given two connected open sets in complex Euclidean spaces and finite sets of distinct points in each – characterizing (in terms of the given point data) the existence of a holomorphic map between the two sets that interpolates the given points. The problem gets its name from Pick – who provided a beautiful characterization for the existence of an interpolant when the domain and the co-domain are the unit disc – and from Nevanlinna, who rediscovered Pick’s result. This characterization is in terms of matrix positivity. I shall begin by presenting an easy argument by Sarason, which he says is already implicit in Nevanlinna’s work, for the necessity of the Pick–Nevanlinna condition. How does one prove the sufficiency of the latter condition? Sarason’s ideas have provided the framework for a long chain of complicated characterizations for the more general problem. The last word on this is still to be written. But in the original set-up of Pick, geometry provides the answer. Surprisingly, Pick did not notice that his approach provides a very clean solution to the interpolation problem where the co-domain is any Euclidean ball. We shall see a proof of the latter. This proof uses a general observation about conditional positivity (which also made an appearance in the previous talk), which is attributed to Schur. If time permits, we shall see what can be said when the co-domain is a bounded symmetric domain.
I will present a historical account of some work of Schoenberg in metric geometry: from his metric space embeddings into Euclidean space and into spheres (Ann. of Math. 1935), to his characterization of positive definite functions on spheres (Duke Math. J. 1942). It turns out these results can be viewed alternately in terms of matrix positivity: from appearances of (conditionally) positive matrices in analysis, to the classification of entrywise positivity preservers in all dimensions.
According to Wikipedia, a continued fraction is “an expression obtained through an iterative process of representing a number as the sum of its integer part and the reciprocal of another number, then writing this other number as the sum of its integer part and another reciprocal, and so on.” It has connections to various areas of mathematics including analysis, number theory, dynamical systems and probability theory.
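To make the iterative process in the quoted definition concrete, here is a minimal sketch (mine, not from the talk) that extracts partial quotients by repeatedly taking integer parts and reciprocals, i.e., by iterating the Gauss map on the fractional part.

```python
from fractions import Fraction

def continued_fraction(x, n_terms=10):
    """First partial quotients [a0; a1, a2, ...] of x, via integer part + reciprocal."""
    terms = []
    for _ in range(n_terms):
        a = int(x)            # integer part
        terms.append(a)
        frac = x - a          # fractional part
        if frac == 0:
            break
        x = 1 / frac          # reciprocal of the remaining part
    return terms

# 355/113 is a famous rational approximation of pi; its expansion terminates.
print(continued_fraction(Fraction(355, 113)))   # [3, 7, 16]
# Floating point works too, for a few terms:
print(continued_fraction(2 ** 0.5, 8))          # [1, 2, 2, 2, 2, 2, 2, 2]
```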
Continued fractions of numbers in the open interval (0,1) naturally give rise to a dynamical system that dates back to Gauss and raises many interesting questions. We will concentrate on the extreme value theoretic aspects of this dynamical system. The first work in this direction was carried out by the famous French-German mathematician Doeblin (1940), who rightly observed that exceedances of this dynamical system have Poissonian asymptotics. However, his proof had a subtle error, which was corrected much later by Iosifescu (1977). Meanwhile, Galambos (1972) had established that the scaled maxima of this dynamical system converge to the Fréchet distribution.
After a detailed review of these results, we will discuss a powerful Stein-Chen method (due to Arratia, Goldstein and Gordon (1989)) of establishing Poisson approximation for dependent Bernoulli random variables. The final part of the talk will be about the application of this Stein-Chen technique to give upper bounds on the rate of convergence in the Doeblin-Iosifescu asymptotics. We will also discuss consequences of our result and its connections to other important dynamical systems. This portion of the talk will be based on a joint work with Maxim Sølund Kirsebom and Anish Ghosh.
Familiarity with standard (discrete and absolutely continuous) probability distributions will be sufficient for this talk.
As a natural generalization of the classical uniformization theorem, one could ask if every conformal class of Riemannian metrics contains a metric with constant scalar curvature. In 1960, Yamabe attempted to solve the problem, and gave an incorrect “proof” of an affirmative answer to the above question. Yamabe’s idea was to use techniques from the calculus of variations and partial differential equations to minimize a certain functional, now named after him. Building on some work of Trudinger, Aubin solved the Yamabe problem under an additional analytic condition, namely that the Yamabe invariant of the manifold is strictly smaller than the corresponding invariant of the n-dimensional sphere. Aubin was able to verify this analytic condition for all manifolds of dimension strictly greater than 5 that are not locally conformally flat. Most remarkably, the problem was finally solved by Schoen in 1984 using the newly proven positive mass theorem of Schoen and Yau, which in turn has origins in the general theory of relativity. A curious feature of Schoen’s proof is that it works precisely in the cases left out by Aubin’s result.
In the first part of the talk, I will provide some background in Riemannian geometry, and then describe the developments surrounding the Yamabe problem until Aubin’s work in 1976. A recurring theme will be the role played by the n-dimensional sphere and its conformal transformations, in all aspects of the problem. In the second part of the talk, I will describe the positive mass theorem, and outline Schoen’s proof of the Yamabe problem in low dimensions and for locally conformally flat manifolds. Most of the Riemannian geometric quantities (including scalar curvature, gradients, Laplacians etc.) will be defined in the talk, and so the hope is that both the talks will be accessible to everyone.
When Harish-Chandra died in 1983, he left behind a voluminous pile of handwritten manuscripts on harmonic analysis on semisimple Lie groups over real/complex and p-adic fields. The manuscripts were turned over to the archives of the Institute for Advanced Study at Princeton, where they remain.
Robert Langlands is the Trustee of the Harish-Chandra archive, and has always been interested in finding a way of salvaging whatever might be valuable in these manuscripts. Some years ago, at a conference in UCLA, he encouraged V. S. Varadarajan and me to look at these manuscripts.
Our efforts have resulted in the publication of Volume 5 (Posthumous) of the Collected Works of Harish-Chandra by Springer Verlag, which appeared in July this year. This volume covers only those manuscripts that deal with real or complex groups. The manuscripts dealing with p-adic groups remain in the archive.
My talk will be devoted to an outline of the results in this volume, without much detail. However, I will try to describe the main ingredient of the methods that cut across most of this work.
Symmetric polynomials are interesting not just in their own right. They appear naturally in various contexts, e.g., as characters in representation theory and as cohomology classes of Grassmannians and other homogeneous spaces in geometry. There are many different bases for the space of symmetric polynomials, perhaps the most interesting of which is the one formed by Schur polynomials. The Littlewood-Richardson (LR) rule expresses the product of two given Schur polynomials as a linear combination of Schur polynomials. The first half of the talk will be a tour of these topics that should be accessible even to undergraduate students.
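The smallest non-trivial instance of the LR rule, to fix ideas: in two variables,
\begin{equation}
s_{(1)}\cdot s_{(1)} = s_{(2)} + s_{(1,1)}, \qquad \text{i.e.} \quad (x_1+x_2)^2 = (x_1^2+x_1x_2+x_2^2) + x_1x_2,
\end{equation}
which on the representation-theoretic side is the decomposition $\mathbb{C}^2\otimes\mathbb{C}^2 \cong \mathrm{Sym}^2\,\mathbb{C}^2 \oplus \wedge^2\,\mathbb{C}^2$ for the general linear group $GL_2$.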
The second half will be an exposition (which hopefully will continue to be widely accessible!) of recent joint research work with Mrigendra Singh Kushwaha and Sankaran Viswanath, both of IMSc. The LR rule above has a natural interpretation as giving the decomposition as a direct sum of irreducibles of the tensor product of two irreducible representations of the unitary (or general linear) group. A generalised version of it (due to Littelmann, still called the LR rule) gives the analogous decomposition for any reductive group. On the tensor product of two irreducible representations, there is the natural “Kostant-Kumar” filtration indexed by the Weyl group. This consists of the cyclic submodules generated by the highest weight vector tensored with an extremal weight vector. We obtain a refined LR rule that gives the decomposition as a direct sum of irreducibles of the Kostant-Kumar submodules (of the tensor product). As an application, we obtain alternative proofs of refinements of “PRV type” results proved by Kumar and others. (PRV = Parthasarathy, Ranga Rao, Varadarajan)
Tropical geometry studies combinatorial structures that arise as “shadows” or “skeletons” of algebraic varieties. These skeletons were motivated in part by the mirror symmetry conjectures in mathematical string theory, but have now grown to function broadly as a tool for the study of algebraic varieties. Much of the work in the subject in the past decade has focused on the geometry of moduli spaces of abstract and parameterized algebraic curves, their tropical analogues, and the relationship between the two. This has led to new results on the topology of $M_{g,n}$, the geometry of spaces of elliptic curves, and to classical questions about the geometry of Hurwitz spaces. I will give an introduction to these ideas and recent advances.
In the first part of this talk we introduce a classical family of symmetric polynomials called Schur polynomials and discuss some of their properties, and a problem that arises from their study, namely the combinatorial interpretation of Littlewood–Richardson coefficients.
In the second part, we explain how the work of Robinson, Schensted, Knuth, Lascoux, and Schuetzenberger on words in an ordered alphabet led to a solution of this combinatorial problem. We will then mention some relatively recent developments in this subject.
Weyl’s law gives an asymptotic formula for the number of eigenfunctions of the Laplace-Beltrami operator on a Riemannian manifold. In 2005, Elon Lindenstrauss and Akshay Venkatesh gave a proof of this law for quite general quotients of semisimple Lie groups. The proof crucially uses the fact that solutions of the corresponding wave equation propagate with finite speed. I will try to explain what they did in the simplest setting of the upper half plane. I will also try to explain why such eigenfunctions, also known as automorphic forms, are of central importance in number theory. The first 45 minutes of the talk should be accessible to students who have a knowledge of some basic complex analysis and calculus.
A geodesic conjugacy between two closed manifolds is a homeomorphism between their unit tangent bundles that takes geodesic flow orbits of one to those of the other in a time-preserving manner. One of the central problems in Riemannian geometry is to understand the extent to which a geodesic conjugacy determines a closed Riemannian manifold itself. While an answer to the question in this generality has so far remained elusive, we give an overview of results on closed surfaces – the most important illustrative case where a complete picture about questions of geodesic conjugacy rigidity is available.
I will present work done with students and colleagues on the collective behaviour of motile organisms, viewed as interacting particles with an autonomous velocity and noise. The talk will include a bit of stochastic processes, some statistical mechanics, and some hydrodynamics. I will discuss experiments, analytical theory and some computation.
Let $X_1, X_2, X_3, \ldots, X_n$ be iid random variables. Laws of large numbers roughly state that the average of these variables converges to the expectation value of each of them when $n$ is large. Various forms of these laws have many applications. The strong and weak laws along with the following three applications will be discussed: a) Coin-tossing. b) The Weierstrass approximation theorem. c) The Glivenko–Cantelli theorem.
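A minimal simulation sketch (mine, only to illustrate application (a) above): the running average of fair coin tosses settles near the expectation $1/2$.

```python
import random

def running_averages(n_tosses):
    """Running proportion of heads in n_tosses fair coin tosses."""
    heads = 0
    averages = []
    for i in range(1, n_tosses + 1):
        heads += random.randint(0, 1)
        averages.append(heads / i)
    return averages

avgs = running_averages(100_000)
for i in (10, 100, 1_000, 10_000, 100_000):
    print(i, avgs[i - 1])     # approaches 1/2 as the number of tosses grows
```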
In the second half of this talk, a law of large numbers is proven for spaces with infinite “volume” (measure) as opposed to the above version for probability measures (“volume” =1).
Terence Tao posted on his blog a question of Apoorva Khare, asking whether the free group on two generators has a length function $l: F_2 \to\mathbb{R}$ (i.e., satisfying the triangle inequality) which is homogeneous, i.e., such that $l(g^n) = nl(g)$. A week later, the problem was solved by an active collaboration of several mathematicians (with a little help from a computer) through Tao’s blog. In fact a more general result was obtained, namely that any homogeneous length function on a group $G$ factors through its abelianization $G/[G, G]$.
I will discuss the proof of this result and also the process of discovery (in which I had a minor role).
In this talk, we prove the unique factorization property of Schur functions. This fundamental property of Schur functions was first observed and proved by C. S. Rajan in 2004. I give a different proof of this beautiful fact, which I obtained jointly with my adviser S. Viswanath. I begin my talk by introducing Schur functions and their connections with the representation theory of general linear groups. Basic knowledge of elementary algebra, such as group theory and linear algebra, will be assumed. If time permits, I will tell you about possible generalizations of this result.
We consider the computation of n-variate polynomials over a field F via a sequence of arithmetic operations such as additions, subtractions, multiplications, divisions, etc. It has been known for at least five decades now that a random n-variate polynomial of degree n is hard to compute. Yet not a single explicit polynomial is provably known to be hard to compute (although we have a lot of good candidates). In this talk we will first describe this problem and its relationship to the P vs NP problem. We will then describe several partial results on this problem, both old and new, along with a more general approach/framework that ties together most of these partial results.
I will give a gentle historical (and ongoing) account of matrix positivity and of operations that preserve it. This is a classical question studied for much of the past century, including by Schur, Polya-Szego, Schoenberg, Kahane, Loewner, and Rudin. It continues to be pursued actively, for both theoretical reasons as well as applications to high-dimensional covariance estimation. I will end with some recent joint work with Terence Tao (UCLA).
The entire talk should be accessible given a basic understanding of linear algebra/matrices and one-variable calculus. That said, I will occasionally insert technical details for the more advanced audience. For example: this journey connects many seemingly distant mathematical topics, from Schur (products and complements), to spheres and Gram matrices, to Toeplitz and Hankel matrices, to rank one updates and Rayleigh quotients, to Cauchy-Binet and Jacobi-Trudi identities, back full circle to Schur (polynomials).
We shall consider sampling procedures to construct subgraphs to infer properties of a network. Specifically, we shall consider sampling procedures in the context of dense and sparse graph limits. We will explore open questions and interesting explorations at the undergraduate level. The talk will be accessible to a general audience.
Given a metric space (X, d), there are several notions of it being negatively curved. In this talk, we single out certain consequences of negative curvature – which are themselves weaker than (X, d) being negatively curved – that turn out to be very useful in proving results about holomorphic maps. We shall illustrate what this means by giving a proof of the Wolff-Denjoy theorem. (This theorem says that given a holomorphic self-map f of the open unit disc, either f has a fixed point in the open unit disc or there exists a point p on the unit circle such that ALL orbits under the successive iterates of f approach p.) For most of this talk, we shall focus on metric spaces or on geometry in one complex variable. Towards the end, we shall briefly point out what can be proved in domains in higher dimensions that have the aforementioned negative-curvature-type properties. This part of the talk is joint work with Andrew Zimmer.
I will introduce the concepts of backward, forward, and covariant Lyapunov vectors (that are basically “eigenfunctions” of certain operators) for a dynamical system and discuss their relevance in studying various stability results, both for the system itself and for nonlinear filtering problem, which will also be introduced along the way. The “relevance” to nonlinear filters is in the form of open questions whose answers (as a set of conjectures) may be gleaned from numerical results and linear filtering theory.
Given n random points on a manifold embedded in a Euclidean space, we wish to understand what topological features of the manifold can be inferred from the geometry of these points. One procedure is to consider union of Euclidean balls of radius r around the points and study the topology of this random set (i.e., the union of balls). A more robust method (known as persistent homology) of inferring the topology of the underlying manifold is to study the evolution of topology of the random set as r varies. What topological information (for example, Betti numbers and some generalizations) about the underlying manifold can we glean for different choices of r ? This question along with some partial answers in the recent years will be the focus of the talk. I shall try to keep the talk mostly self-contained assuming only knowledge of basic probability and point-set topology.
The formalism of an “abelian category” is meant to axiomatize the operations of linear algebra. From there, the notion of “derived category” as the category of complexes “upto quasi-isomorphisms” is natural, motivated in part by topology. The formalism of t-structures allows one to construct new abelian categories which are quite useful in practice (giving rise to new cohomology theories like intersection cohomology, for example). In this talk we want to discuss a notion of punctual (=”point-wise”) gluing of t-structures which is possible in the context of algebraic geometry. The essence of the construction is classical and well known, but the new language leads to useful constructions in the motivic world.
Graph-partitioning problems are a central topic of research in the study of algorithms and complexity theory. They are of interest to theoreticians with connections to error correcting codes, sampling algorithms, metric embeddings, among others, and to practitioners, as algorithms for graph partitioning can be used as fundamental building blocks in many applications. One of the central problems studied in this field is the sparsest cut problem, where we want to compute the cut which has the least ratio of number of edges cut to size of smaller side of the cut. This ratio is known as the expansion of the cut. In this talk, I will talk about higher order variants of expansion (i.e. notions of expansion corresponding to partitioning the graph into more than two pieces, etc.), and how they relate to the graph’s eigenvalues. The proofs will also show how to use the graph’s eigenvectors to compute partitions satisfying these bounds. Based on joint works with Prasad Raghavendra, Prasad Tetali and Santosh Vempala.
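As a small illustration (my own sketch, not from the talk) of using eigenvectors to find sparse cuts: threshold the second eigenvector of the graph Laplacian (the Fiedler vector) of a graph made of two cliques joined by a single edge.

```python
import numpy as np

def fiedler_partition(adj):
    """Split vertices by the sign of the second-smallest Laplacian eigenvector."""
    deg = np.diag(adj.sum(axis=1))
    L = deg - adj
    vals, vecs = np.linalg.eigh(L)       # eigenvalues in increasing order
    fiedler = vecs[:, 1]                 # eigenvector for the second-smallest eigenvalue
    return fiedler < 0

# Two 4-cliques (vertices 0-3 and 4-7) joined by the single edge (3, 4).
n = 8
adj = np.zeros((n, n))
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                adj[i, j] = 1
adj[3, 4] = adj[4, 3] = 1

print(fiedler_partition(adj))   # one clique on each side of the sparse cut
```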
Understanding the origins of intermittency in turbulence remains one of the most fundamental challenges in applied mathematics. In recent years there has been a fresh attempt to understand this problem through the development of the method of Fourier decimation. In this talk, we will review these recent results and analyse the role of precise numerical simulations in understanding the mathematics of the Navier-Stokes and Euler equations.
It is generally well known that there is an innate notion of things like isomorphisms or epimorphisms. This allows us to talk about isomorphisms or epimorphisms of various objects: groups, rings, algebras, etc. In other words, “isomorphism” is really a categorical notion. However, it is not so well known that finiteness itself is also categorical. In this talk, we will discuss how finiteness applies to various categories. This will allow us to see finite sets, finite dimensional vector spaces, finitely generated algebras and compact sets as manifestations of the same basic idea.
In this talk I will present algorithmic applications of an approximate version of Caratheodory’s theorem. The theorem states that given a set of $d$-dimensional vectors $X$, for every vector in the convex hull of $X$ there exists an $\epsilon$-close (under the $p$-norm, for $2 \leq p < \infty$) vector that can be expressed as a convex combination of at most $b$ vectors of $X$, where the bound $b$ depends on $\epsilon$ and the norm $p$, and is independent of the ambient dimension $d$. This theorem can be obtained by instantiating Maurey’s lemma (cf. Pisier 1980/81 and Carl 1985).
I will describe how this approximate version of Caratheodory’s theorem leads to novel additive approximation algorithms for finding (i) Nash equilibria in two-player games and (ii) dense subgraphs.
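A minimal sketch (my own illustration, for the Euclidean case $p=2$) of the sampling idea behind Maurey’s lemma: to approximate a point in the convex hull, sample $b$ vectors of $X$ according to the convex weights and average; the expected error is $O(\max_i\lVert x_i\rVert/\sqrt{b})$, independently of the ambient dimension.

```python
import numpy as np

rng = np.random.default_rng(1)

def approx_caratheodory(X, weights, b):
    """Sample b rows of X with the given convex weights and return their average.

    X: (m, d) array of vectors; weights: convex combination coefficients.
    The average uses only b of the vectors, yet is close to X.T @ weights in l2.
    """
    idx = rng.choice(len(X), size=b, p=weights)
    return X[idx].mean(axis=0)

m, d = 200, 10_000                       # many vectors, large ambient dimension
X = rng.standard_normal((m, d))
w = rng.random(m)
w /= w.sum()                             # a point in the convex hull of the rows of X
target = X.T @ w

for b in (10, 100, 1000):
    approx = approx_caratheodory(X, w, b)
    print(b, np.linalg.norm(approx - target))   # error decays roughly like 1/sqrt(b)
```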
A classical theorem in Riemannian geometry asserts that products of compact manifolds cannot admit Riemannian metrics with negative sectional curvature. A fibration (or a fibre bundle) is a natural generalization of a product and hence one can ask if a fibration can admit such a metric. This question is still open and I will discuss it in the context of Kahler manifolds. This leads to the study of graphs of holomorphic motions, which originally arose in complex dynamics. I will sketch a proof that the graph cannot be biholomorphic to a ball or more generally, a strongly pseudoconvex domain.
This talk is about computing (approximately) the “best” map between two polygons taking vertices to vertices. It arises out of a real-life problem, namely, surface registration. Our notion of “best” is extremal quasiconformality (least angle-distortion). I will try to keep the talk as self-contained as possible. It is based on a joint-work with M. Goswami, G. Telang, and X. Gu.
One of the earliest instances of an integral-geometric discussion can be traced back to Buffon through the “needle problem”. The solution, as we know now, is based on a platform of the theory of measures on geometrical spaces which are invariant under some group operations. In this talk, we shall walk through this subject by following a specific line of problems/results called “kinematic fundamental formulae”, studying some specific examples along the way.
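A minimal simulation sketch (mine, not part of the talk) of Buffon’s needle: when the needle length equals the line spacing, the crossing probability is $2/\pi$, so the empirical crossing frequency yields an estimate of $\pi$.

```python
import math
import random

def buffon_pi_estimate(n_throws, needle=1.0, spacing=1.0):
    """Estimate pi by dropping a needle on a plane ruled with parallel lines."""
    crossings = 0
    for _ in range(n_throws):
        y = random.uniform(0, spacing / 2)        # distance of needle centre to nearest line
        theta = random.uniform(0, math.pi / 2)    # acute angle with the lines
        if y <= (needle / 2) * math.sin(theta):   # the needle crosses a line
            crossings += 1
    # P(cross) = 2 * needle / (pi * spacing), valid for needle <= spacing
    return 2 * needle * n_throws / (spacing * crossings)

print(buffon_pi_estimate(1_000_000))
```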
Inverse spectral theory in one dimension is to recover a self-adjoint operator given its spectrum and some spectral measure. In general there are many self-adjoint operators (of a given form) with the same spectrum; this collection is called an iso-spectral set. We will give a brief introduction to the questions in this area and also some connections to other areas of mathematics.
I will introduce minimal surfaces and explain the Weierstrass-Enneper representation of a minimal surface using hodographic coordinates. I will mention an interesting link between minimal surfaces and Born-Infeld solitons. If time permits, I will explain my work (with P. Kumar and R.K. Singh) on the interpolation of two real analytic curves by a minimal surface of the Björling-Schwarz type.
Owing to several applications in large scale learning and vision problems, fast submodular function minimization (SFM) has become a very important problem. Theoretically, unconstrained SFM is polynomial time solvable, however, these algorithms are not practical. In 1976, Wolfe proposed a heuristic for projection onto a polytope, and in 1980, Fujishige showed how Wolfe’s heuristic can be used for SFM. For general submodular functions, this Fujishige-Wolfe heuristic seems to have the best empirical performance. Despite its good practical performance, very little is known about Wolfe’s projection algorithm theoretically.
In this talk, I will describe the first analysis which proves that the heuristic is polynomial time for bounded submodular functions. Our work involves the first convergence analysis of Wolfe’s projection heuristic, and proving a robust version of Fujishige’s reduction theorem.
There exist various methods to distribute seats proportionally between states (or parties) in a parliament. In the first half of the talk I will describe some often used methods and discuss their pros and cons (it’s all in the rounding).
One easy method is called Hamilton’s method (also known as the method of largest remainder or Hare’s method). It suffers from a drawback called the Alabama paradox, which, for example, led the USA to abandon it for the distribution of seats in the House of Representatives between states. It is still in use in many other countries including Sweden.
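For concreteness, here is a minimal sketch (mine) of Hamilton’s largest-remainder method: give each state the integer part of its exact quota, then hand the leftover seats to the largest fractional remainders. The Alabama paradox is the fact that increasing the house size can decrease a state’s seats under this rule, as the toy populations below show.

```python
def hamilton(populations, seats):
    """Hamilton's (largest remainder) apportionment of `seats` among `populations`."""
    total = sum(populations)
    quotas = [p * seats / total for p in populations]
    alloc = [int(q) for q in quotas]                 # integer parts first
    leftovers = seats - sum(alloc)
    # give the remaining seats to the largest fractional remainders
    order = sorted(range(len(populations)),
                   key=lambda i: quotas[i] - alloc[i],
                   reverse=True)
    for i in order[:leftovers]:
        alloc[i] += 1
    return alloc

# Textbook toy illustration of the Alabama paradox (made-up populations 6, 6, 2):
pops = [6, 6, 2]
print(hamilton(pops, 10))   # -> [4, 4, 2]
print(hamilton(pops, 11))   # -> [5, 5, 1]: the third state loses a seat as the house grows
```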
In the second half of the talk I will describe a joint work with Svante Janson (Uppsala Univ.) where we study the probability that the Alabama paradox will happen. We give, under certain assumptions, a closed formula for the probability that the Alabama paradox occurs given the vector $p_1,\dots,p_m$ of relative sizes of the states.
Eigenvalues and eigenvectors appear in many physical and engineering problems, beginning with the noted study by Euler in 1751 of the kinematics of rigid bodies. From a mathematical point of view, for an operator of a suitable class, acting in a vector space, its point spectrum and associated subspaces refer to its “eigenvalues and eigenvectors”, while the subspaces associated with the continuous spectrum of the operator is said to consist of “eigenfunctions”. The basic ideas will be discussed mostly through examples, in some of which a natural connection with the representation of appropriate groups lurks behind.
This talk shall be an introduction to some aspects of Teichmüller theory, which is concerned with the parameter space of marked Riemann surfaces. I shall describe Wolf’s parametrization of Teichmüller space using harmonic maps, and discuss how it extends to the Thurston compactification. If time permits, I will describe some recent joint work with Wolf.
We get useful insight about various algebraic structures by studying their automorphisms and endomorphisms. Here we give a brief introduction to the theory of semigroups of endomorphisms of the algebra of all bounded operators on a Hilbert space.
The usual foundations of mathematics based on Set theory and Predicate calculus (and extended by category theory), while successful in many ways, are so far removed from everyday mathematics that the possibility of translation of theorems to their formal counterparts is generally purely a matter of faith. Homotopy type theory gives alternative foundations for mathematics. These are based on an extension of type theory (from logic and computer science) using an unexpectedly deep connection of Types with Spaces discovered by Voevodsky and Awodey-Warren. As a consequence of this relation we also obtain a synthetic view of homotopy theory. In this lecture, I will give a brief introduction to this young subject.
It is well known that finding an explicit solution for partial differential equations is almost impossible in most of the cases. In this talk we will give a brief introduction to the analysis of certain nonlinear PDEs using various tools from analysis. We will discuss some concrete examples and some open problems (depending on time). The talk should be accessible to senior undergraduate students.
Independent component analysis (ICA) is a basic problem that arises in several areas including signal processing, statistics, and machine learning. In this problem, we are given linear superpositions of signals. E.g., we could be receiving signals from several sensors but the receivers only get the weighted sums of these signals. The problem is to recover the original signals from the superposed data. In some situations this turns out to be possible: the main assumption being that the signals at different sensors are independent random variables. While independent component analysis is a well-studied problem, one version of it was not well-understood, namely when the original signals are allowed to be heavy-tailed, such as those with a Pareto distribution. Such signals do arise in some applications. In this talk, I will first discuss the previously known algorithms for ICA and then a new algorithm that applies also to the heavy-tailed case. The techniques used are basic linear algebra and probability.
We consider integral lattices $L$ in a Euclidean space $V = \mathbb{R}^m$, i.e. $\mathbb{Z}$-submodules of full rank in $V$ such that all vectors in $L$ have integral length. It is impossible to classify such lattices up to isometry; there are just too many of them in general, even if we fix additional invariants such as the discriminant. Therefore one looks for interesting subclasses of lattices, in particular “extremal lattices”, characterized by the property that the smallest length of a non-zero vector in $L$ is “as large as possible”. There are several ways to make this more precise; we will focus on analytic extremality, where modular forms come in. In particular, we will consider extremality for maximal lattices.
Our vision is unsurpassed by machines because we use a sophisticated object representation. This representation is unlike the retinal image: on the one hand, two out-of-phase checkerboards, maximally different in image pixels, appear perceptually similar. On the other hand, two faces, similar in their image pixels, appear perceptually distinct. What is then the nature of perceptual space? Are there principles governing its organization? To address these questions, we have been using visual search to measure similarity relations between objects.
I will summarize a line of research from our laboratory indicative of a surprising linear rule governing distances in perceptual space. In the first study, we found that search time is inversely proportional to the feature difference between the target and distracters. The reciprocal of search time is therefore linear in this feature difference and, interestingly, behaves like a mathematical distance metric. It also has a straightforward interpretation as a saliency signal that drives visual search (Arun, 2012). In a second study, complex searches involving multiple distracters were explained by a linear sum of pair-wise dissimilarities measured from simpler searches involving homogeneous distracters (Vighneshvel & Arun, 2013). In a third study, dissimilarities between objects differing in multiple features were found to combine linearly. This was also true for integral features such as the length and width of a rectangle upon including aspect ratio as an additional feature (Pramod & Arun, 2014). Finally, I will describe some recent results extending these findings to more naturalistic objects.
Arun and Olson (2010) conducted a visual search experiment where human subjects were asked to identify, as quickly as possible, an oddball image embedded among multiple distractor images. The reciprocal of the search times for identifying the oddball (in humans) and an ad hoc neuronal dissimilarity index, computed from measured neuronal responses to component images (in macaques), showed a remarkable correlation. In this talk, I will describe a model, an active sequential hypothesis testing model, for visual search. The analysis of this model will suggest a natural alternative neuronal dissimilarity index. The correlation between the reciprocal of the search times and the new dissimilarity index continues to be equally high, and has the advantage of being firmly grounded in decision theory. I will end the talk by discussing the many gaps and challenges in our modeling and statistical analysis of visual search. The talk will be based on ongoing work with Nidhin Koshy Vaidhiyan (ECE) and S. P. Arun (CNS).
Weyl discovered a decomposition of the invariant probability measure on a connected compact Lie group, according to elements of a maximal torus and their conjugacy classes. In the case of the unitary group, it allowed Diaconis and Shahshahani to access the distribution of the eigenvalues of a random unitary matrix. We explore these ideas in the lecture.
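The following sketch (an illustration written for these notes, not part of the lecture) samples Haar-random unitary matrices by the standard QR-with-phase-correction recipe and checks the Diaconis–Shahshahani-type prediction that the trace is approximately a standard complex Gaussian.

```python
# Sketch (illustration only): sample Haar-random unitary matrices and look at
# the statistics of their eigenvalues, in the spirit of Diaconis-Shahshahani.
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(n):
    """Haar-distributed U(n) matrix via QR of a complex Ginibre matrix."""
    Z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))  # fix the phases column-wise

n, trials = 8, 2000
traces = np.array([np.trace(haar_unitary(n)) for _ in range(trials)])

# For large n, Tr(U) is approximately a standard complex Gaussian,
# so Re Tr(U) should have mean ~0 and variance ~1/2.
print(np.mean(traces.real).round(3), np.var(traces.real).round(3))
```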
Consider n cards labelled 1 through n, lying face up in a row on a table. Suppose two integers a and b are chosen independently and uniformly between 1 and n, and the cards labelled a and b are switched. If many such transpositions are made, the row of cards will tend toward a random arrangement. The question then is: how many steps are required until the deck is well mixed up, i.e. the permutation is close to uniformly random? Diaconis and Shahshahani used tools from the representation theory of symmetric groups to prove that, for large n, at least $\frac{1}{2}n\log n$ steps are required before the deck is well mixed up. I will explain the ideas of their proof and a few related problems.
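A quick simulation (illustration only, not the representation-theoretic argument) of the random transposition shuffle around the $\frac{1}{2}n\log n$ threshold:

```python
# Apply random transpositions and watch how many cards stay in place around
# the (1/2) n log n threshold.
import numpy as np

rng = np.random.default_rng(2)
n = 200
threshold = int(0.5 * n * np.log(n))

for factor in (0.5, 1.0, 2.0):
    fixed = []
    for _ in range(200):
        perm = np.arange(n)
        for _ in range(int(factor * threshold)):
            a, b = rng.integers(n, size=2)   # a == b allowed, as in the abstract
            perm[a], perm[b] = perm[b], perm[a]
        fixed.append(np.sum(perm == np.arange(n)))
    # For a uniformly random permutation the expected number of fixed
    # points is 1, independent of n; well below the threshold it is larger.
    print(factor, np.mean(fixed))
```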
A Kronecker coefficient counts the number of times a representation of a symmetric group occurs in the tensor product of two others. Finding a fast algorithm to determine when a Kronecker coefficient is positive is an open problem. There has been increased interest in this problem over the last few years, as it comes up in the geometric approach of Mulmuley and Sohoni to the P versus NP problem. I will explain how a higher-dimensional analogue of the Robinson–Schensted–Knuth correspondence relates Kronecker coefficients to the problem of counting integer arrays with specified slice sums.
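As one concrete, deliberately tiny reading of the counting problem mentioned above (illustrative only; no claim that this particular count is itself a Kronecker coefficient), here is a brute-force count of nonnegative integer 3D arrays whose slice sums along each axis match prescribed vectors.

```python
# Brute-force enumeration of nonnegative integer 3D arrays with prescribed
# slice sums along each of the three axes (toy sizes only).
import itertools
import numpy as np

alpha, beta, gamma = (2, 1), (2, 1), (2, 1)   # prescribed slice sums (toy data)
shape = (len(alpha), len(beta), len(gamma))
total = sum(alpha)
assert sum(beta) == total and sum(gamma) == total

count = 0
for entries in itertools.product(range(total + 1), repeat=int(np.prod(shape))):
    A = np.array(entries).reshape(shape)
    if (A.sum(axis=(1, 2)) == alpha).all() \
            and (A.sum(axis=(0, 2)) == beta).all() \
            and (A.sum(axis=(0, 1)) == gamma).all():
        count += 1
print(count)
```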
In the first half of the talk, I will define the dimer model on planar graphs and prove Kasteleyn’s groundbreaking result expressing the partition function (i.e. the generating function) of the model as a Pfaffian. I will then survey various results arising as a consequence, culminating in the beautiful limit shape theorems of Kenyon, Okounkov and coworkers.
In the second half, I will define a variant of the monomer-dimer model on planar graphs and prove that the partition function of this model can be expressed as a determinant. I will use this result to calculate various quantities of interest to statistical physicists and end with some open questions.
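For planar bipartite graphs Kasteleyn's Pfaffian can equivalently be written as a determinant of a black-to-white matrix; the sketch below (a standard special case, written for this summary rather than taken from the talk) counts dimer coverings of an $m\times n$ grid this way, with horizontal edges weighted $1$ and vertical edges weighted $i$.

```python
# Kasteleyn-style count of dimer coverings of an m x n grid: weight
# horizontal edges 1 and vertical edges i; the number of coverings is then
# |det K| for the black-to-white adjacency matrix K.
import numpy as np

def dimer_count(m, n):
    black = [(r, c) for r in range(m) for c in range(n) if (r + c) % 2 == 0]
    white = [(r, c) for r in range(m) for c in range(n) if (r + c) % 2 == 1]
    if len(black) != len(white):
        return 0                       # odd number of sites: no coverings
    K = np.zeros((len(black), len(white)), dtype=complex)
    for i, (r, c) in enumerate(black):
        for j, (r2, c2) in enumerate(white):
            if abs(r - r2) + abs(c - c2) == 1:
                K[i, j] = 1 if r == r2 else 1j   # horizontal: 1, vertical: i
    return round(abs(np.linalg.det(K)))

print(dimer_count(2, 2), dimer_count(2, 3), dimer_count(4, 4))  # 2, 3, 36
```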
simpcomp is an official extension to the computer algebra system GAP; that is, simpcomp is part of every full GAP installation. The software allows the user to compute numerous properties of abstract simplicial complexes such as the f-, g- and h-vectors, the fundamental group, the automorphism group, (co-)homology, the intersection form, and many more. It provides functions to generate simplicial complexes from facet lists, orbit representatives, or permutation groups. In addition, functions related to slicings and polyhedral Morse theory as well as a combinatorial version of algebraic blowups and the possibility to resolve isolated singularities of 4-manifolds are implemented.
In this talk, I will give an overview of the existing features of the software and also discuss some ongoing projects such as (i) support for Forman’s discrete Morse theory for homology and fundamental group computations, and (ii) an extension of the features to analyse tightness of triangulated manifolds.
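simpcomp itself is used from within GAP; to keep the code samples in these notes in one language, here is a minimal Python sketch (not simpcomp's interface) of the most basic invariant mentioned above, the f-vector computed from a facet list.

```python
# Not simpcomp itself (which is a GAP package): a minimal Python sketch of
# one invariant it computes, the f-vector of a complex given by its facets.
from itertools import combinations

def f_vector(facets):
    """All faces generated by the facet list, counted by dimension."""
    faces = set()
    for facet in facets:
        for k in range(1, len(facet) + 1):
            faces.update(combinations(sorted(facet), k))
    dim = max(len(f) for f in faces) - 1
    return [sum(1 for f in faces if len(f) == k + 1) for k in range(dim + 1)]

# Boundary of the tetrahedron (a triangulated 2-sphere).
facets = [(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]
fv = f_vector(facets)
euler = sum((-1) ** k * fk for k, fk in enumerate(fv))
print(fv, euler)   # [4, 6, 4] and Euler characteristic 2
```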
In this talk we explore new approaches to the old and difficult computational problem of unknot recognition. Although the best known algorithms for this problem run in exponential time, there is increasing evidence that a polynomial time solution might be possible. We outline several promising approaches, in which computational geometry, linear programming and greedy algorithms all play starring roles.
We finish with a new algorithm that combines techniques from topology and combinatorial optimisation, which is the first to exhibit “real world” polynomial time behaviour: although it is still exponential time in theory, exhaustive experimentation shows that this algorithm can solve unknot recognition for “practical” inputs by running just a linear number of linear programs.
Abstract: See https://drive.google.com/file/d/0Bx6ccM3uL81GNHZqQUtic3g5TndEMmF3Y0JKSDl3LTBTeFU4/view?usp=sharing.
The goal of these talks is to discuss the history, the circumstances, and (the main aspects of) the proof of a recent result of ours, which says: a proper holomorphic map between two nonplanar bounded symmetric domains of the same dimension, one of them being irreducible, is a biholomorphism.
Results of this character are motivated by a remarkable theorem of H. Alexander about proper holomorphic self-maps of the Euclidean ball. The first part of the talk will focus on the meanings of the terms in the above theorem. We will give an elementary proof that the automorphism group of a bounded symmetric domain acts transitively on it. But the existence of an involutive symmetry at each point is restrictive in many other ways. For instance, it yields a complete classification of the bounded symmetric domains. This follows from the work of Elie Cartan.
Earlier results in the literature that are special cases of our result focused on different classes of domains in the Cartan classification, and have mutually irreconcilable proofs. One aspect of our work is a unified proof: a set of techniques that works across the Cartan classification. One of those techniques is the use of an algebraic gadget: the machinery of Jordan triple systems.
We will present definitions and examples related to this machinery in the first part of the talk. In the second part of the talk, we shall state a result on finite-dimensional Jordan triple systems that is essential to our work. After this, we shall discuss our proof in greater depth.
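For concreteness, one standard example of the machinery mentioned above (my choice of example; the talk's own examples may differ) is the space $M_{p\times q}(\mathbb{C})$ of complex $p\times q$ matrices with the triple product
\[
\{x,y,z\} \;=\; \tfrac{1}{2}\,\bigl(x y^{*} z + z y^{*} x\bigr),
\]
whose open unit ball in the operator norm is a bounded symmetric domain of type I.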
Given n non-zero real numbers $v_1,\dots,v_n$, what is the maximum possible number of subsets $\{i_1,\dots,i_k\}$ that can have the same subset sum $v_{i_1}+\dots+v_{i_k}$? Littlewood and Offord raised the question, showed that the answer is at most of order $2^n\log n/\sqrt{n}$, and conjectured that the $\log n$ factor can be removed. Erdos proved the conjecture and raised further questions that have continued to attract attention, primarily relating the arithmetic structure of $v_1,\dots,v_n$ to the maximum number of subsets with a given subset sum.
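A brute-force illustration of the quantity in question, feasible only for small n (my sketch, not part of the lectures): with all $v_i$ equal, $\binom{n}{\lfloor n/2\rfloor}$ subsets share a common sum, which is exactly the Erdos bound.

```python
# Largest number of subsets of {v_1, ..., v_n} sharing a common subset sum,
# computed by exhaustive enumeration (small n only).
from itertools import combinations
from collections import Counter
from math import comb

def max_concentration(v):
    sums = Counter()
    for k in range(len(v) + 1):
        for subset in combinations(v, k):
            sums[sum(subset)] += 1
    return max(sums.values())

n = 12
print(max_concentration([1] * n))                # all v_i equal: C(n, n//2) subsets tie
print(comb(n, n // 2))                           # the Erdos bound, attained above
print(max_concentration(list(range(1, n + 1))))  # distinct values: noticeably fewer
```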
In the first lecture, we shall review and prove several of these results (due to Littlewood–Offord, Erdos, Moser, Stanley, Halasz, Tao–Vu, Rudelson–Vershynin…) and show an application to random matrices.
In the second part, we start with a question in random polynomials and see how it leads to a variant of the Littlewood–Offord problem that has not been considered before.
Most of the material presented should be accessible to undergraduate students. Much of the lecture is an outcome of my joint study in summer with Sourav Sarkar (ISI, Kolkata).
Motivated by applications, we introduce a class of optimization problems with constraints. Difficulties in solving these problems are highlighted. Mathematical developments to overcome these difficulties are discussed.
I will discuss the negative curvature properties of certain intrinsic metrics in complex analysis. The talk will be accessible to graduate students.
Great progress on the twin prime problem was made last year by Yitang Zhang, and it made him famous in mathematical circles almost overnight. He proved the existence of a positive real number M such that there are infinitely many pairs of primes that differ by at most M. The twin prime conjecture predicts that M can be taken to be two. I shall give an overview of the works leading up to Zhang’s spectacular achievement.
Quadrature domains are those on which holomorphic functions satisfy a generalized mean value equality. The purpose of this talk will be to reflect on the Schwarz reflection principle and to understand how it leads to examples of quadrature domains.
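The simplest instance of such a generalized mean value equality is the unit disc $\mathbb{D}$, where the area mean value property of holomorphic functions gives
\[
\int_{\mathbb{D}} f\, dA \;=\; \pi f(0) \qquad \text{for every integrable holomorphic } f \text{ on } \mathbb{D};
\]
quadrature domains are, roughly, the domains admitting identities of this kind with finitely many nodes (and possibly derivatives of $f$) on the right-hand side.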
In this talk we address the problem of integrating entire functions (of several complex variables) in ‘polar coordinates’!
Wolfgang Kuhnel introduced the notion of tightness and conjectured that all tight triangulations of closed manifolds are strongly minimal. In a recent paper with B. Datta (European J. Comb. 36 (2014), 294–313), the speaker obtained a very general combinatorial criterion for tightness and found results in partial confirmation of this conjecture. We shall discuss these results.
Given an $L^1$ function $f$ on an infinite measure space $S$, how does one estimate the integral of $f$ using statistical tools? In this talk we propose using regenerative sequences with heavy tails to do this. We obtain a consistent estimator and show that the rate of convergence is slower than $1/\sqrt{n}$. When $S$ is countable, the simple symmetric random walk (SSRW) works.
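A heavily hedged sketch of what the countable case might look like (this is my reading of the remark about the SSRW, not necessarily the construction from the talk): average the sums of $f$ over excursions of the simple symmetric random walk between successive returns to 0.

```python
# Estimate sum_{x in Z} f(x) by averaging the sums of f over excursions of
# the simple symmetric random walk (SSRW) between successive returns to 0.
import numpy as np

rng = np.random.default_rng(3)
f = lambda x: 1.0 / (1.0 + x * x)         # an absolutely summable test function

steps = 1_000_000
walk = np.concatenate([[0], np.cumsum(rng.choice((-1, 1), size=steps))])
zeros = np.flatnonzero(walk == 0)         # regeneration times

# Each cycle runs from one visit to 0 up to (not including) the next one.
cycle_sums = [f(walk[a:b]).sum() for a, b in zip(zeros[:-1], zeros[1:])]
estimate = np.mean(cycle_sums)            # consistent, but converges slowly
truth = sum(f(x) for x in range(-10_000, 10_001))
print(len(cycle_sums), round(estimate, 3), round(truth, 3))
```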
I will talk about techniques to approximate real functions such as $x^s$, $x^{-1}$ and $e^{-x}$ by simpler functions, and how these results can be used in the design of fast algorithms. The key lies in the fact that such approximations yield faster ways to approximate primitives such as $A^s v$, $A^{-1}v$ and $\exp(-A)v$, and to compute matrix eigenvalues and eigenvectors. Indeed, many fast algorithms reduce to the computation of such primitives, which have proved useful for speeding up several fundamental computations, such as random walk simulation, graph partitioning, solving linear systems of equations, and combinatorial approaches for solving semi-definite programs.
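As a small, generic illustration of the theme (not the talk's specific constructions), the sketch below approximates $\exp(-A)v$ by a plain truncated Taylor polynomial built only from matrix-vector products, and compares it with a direct computation via the eigendecomposition of a symmetric test matrix.

```python
# A degree-k polynomial built from matrix-vector products approximates
# exp(-A) v without ever forming exp(-A).
import numpy as np

rng = np.random.default_rng(4)
n = 300
B = rng.normal(size=(n, n))
A = B @ B.T / n                      # a symmetric positive semidefinite matrix
v = rng.normal(size=n)

def expm_vec_taylor(A, v, k):
    """Truncated Taylor series for exp(-A) v using only mat-vec products."""
    result, term = v.copy(), v.copy()
    for j in range(1, k + 1):
        term = -(A @ term) / j       # next Taylor term (-A)^j v / j!
        result += term
    return result

# Reference answer via the eigendecomposition of the symmetric matrix A.
w, V = np.linalg.eigh(A)
exact = V @ (np.exp(-w) * (V.T @ v))

for k in (2, 5, 10, 20):
    err = np.linalg.norm(expm_vec_taylor(A, v, k) - exact) / np.linalg.norm(exact)
    print(k, f"{err:.2e}")           # the error decreases as the degree grows
```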