2024 – 2025 Academic Year
Organized by: Albert Kunhui Luan (kunhui.luan@sc.edu) & Zhaoqing Xu (zhaoqing@email.sc.edu)
This page will be updated as new seminars are scheduled. Make sure to check back each week for information on upcoming seminars.
When: Monday, December 2, 2024, from 2:30 - 3:30 p.m.
Speaker: Zhonggan Huang (University of Utah)
Location: Virtual via Zoom
Abstract: TBA
When: Friday, November 15, 2024, from 3:40 - 4:30 p.m.
Speaker: Qiyu "Lily" Wu (University of South Carolina)
Location: LeConte 440 (Synchronous via Zoom)
Abstract: Convergence and convergence rate are critical aspects of optimization, but they can be challenging to analyze from the optimization perspective alone. Viewed instead through the lens of ODEs, we can apply a Lyapunov function to study the dissipation of energy, which in turn gives deeper insight into an optimization method's convergence rate. This ODE perspective also sheds light on why some optimization methods perform better in specific situations. During the talk, we will use Nesterov's method as our running example.
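As a concrete reference point (background only, due to Su, Boyd, and Candès, and not taken from the abstract), Nesterov's accelerated method can be modeled in the continuous limit by the ODE
\[
\ddot{X}(t) + \frac{3}{t}\,\dot{X}(t) + \nabla f(X(t)) = 0,
\]
for which the Lyapunov (energy) function
\[
\mathcal{E}(t) = t^2\bigl(f(X(t)) - f(x^{*})\bigr) + 2\bigl\|X(t) + \tfrac{t}{2}\dot{X}(t) - x^{*}\bigr\|^2
\]
is non-increasing along trajectories, which immediately yields the accelerated rate \(f(X(t)) - f(x^{*}) \le 2\|x_0 - x^{*}\|^2 / t^2\) for convex \(f\).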
When: Monday, November 11, 2024, from 2:30 - 3:30 p.m.
Speaker: Ovidiu-Neculai Avadanei (University of California, Berkeley)
Location: Virtual via Zoom
Abstract: We consider the well-posedness of the generalized surface quasi-geostrophic (gSQG) front equation. By making use of the null structure of the equation, we carry out a paradifferential normal form analysis in order to obtain balanced energy estimates, which allows us to prove the local well-posedness of the gSQG front equation in the non-periodic case at a low level of regularity (in the SQG case, this is only one half of a derivative above scaling). In addition, we establish global well-posedness for small and localized rough initial data, as well as modified scattering, by using the testing-by-wave-packets approach of Ifrim-Tataru. This is joint work with Albert Ai.
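As background (not part of the abstract), the gSQG family of active scalar equations, from which the front equation is derived, can be written in one common convention as
\[
\partial_t \theta + u \cdot \nabla \theta = 0, \qquad u = \nabla^{\perp} (-\Delta)^{-1 + \frac{\alpha}{2}} \theta,
\]
where \(\alpha = 0\) recovers the 2D Euler equation in vorticity form and \(\alpha = 1\) the SQG equation; exact normalizations and sign conventions vary across the literature.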
When: Friday, November 8, 2024, from 2:30 - 3:30 p.m.
Speaker: Jianguo Hou (University of South Carolina)
Location: Virtual via Zoom
Abstract: This talk will present an introduction to Graph Neural Networks (GNNs), with a primary focus on Graph Convolutional Networks (GCNs). We will begin by examining spectral domain graph analysis, emphasizing the role of graph Laplacians and spectral graph convolutions in representing and processing graph-structured data. Specifically, we will explore why GCNs can be viewed as graph spectral filters, capturing crucial frequency components of graph signals. The seminar will conclude with a brief overview of spatial domain approaches in GNNs, demonstrating how neighborhood-based aggregation provides a complementary method for learning on graphs.
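As a concrete illustration of the propagation rule discussed above (a minimal sketch of the standard first-order GCN layer, assuming NumPy and dense matrices; this is background, not code from the talk):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).

    A: (n, n) adjacency matrix, H: (n, d_in) node features, W: (d_in, d_out) weights.
    The normalized adjacency acts as a low-pass spectral filter on graph signals.
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # degrees of the augmented graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # aggregate neighbors, then apply ReLU
```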
When: Monday, October 21, 2024, from 2:30 - 3:30 p.m.
Speaker: Yang Chu (University of California, Berkeley)
Location: Virtual via Zoom
Abstract: The concave majorant K of a one-dimensional Brownian motion B is the minimal concave function that dominates B. Initiated by Groeneboom, there is a rich literature on concave majorants/convex minorants of random walks, Brownian motion, and Lévy processes.
I will begin with an overview of the literature and then discuss a recent conjecture: in light of the famous 2M-B theorem of Pitman, it was recently conjectured by Ouaki and Pitman that 2K-B has the same law as the BES(5) process.
While the two processes are similar in many ways, we show that this conjecture is false. In particular, we show that under conditioning on “pinning at a point at time infinity”, 2K-B behaves essentially as a mixture of BES(3) processes. Under this conditioning, we derive a path decomposition of 2K-B similar to that of Williams, as well as other properties, such as the distribution of multiple points, which are intractable for the original 2K-B, as pointed out by Ouaki-Pitman. As a byproduct, we characterize how a mixture of BES(3) processes can, locally at time 0, behave like another nonsingular one-dimensional diffusion, which can be considered a partial converse of Williams' path decomposition. Joint work with Lingfu Zhang.
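For context (a statement of the classical result alluded to above, not of the new work), Pitman's 2M-B theorem says that if \(M_t = \max_{0 \le s \le t} B_s\) is the running maximum of a standard Brownian motion \(B\), then
\[
\bigl(2M_t - B_t\bigr)_{t \ge 0} \;\overset{d}{=}\; \mathrm{BES}(3),
\]
i.e. the process has the law of a three-dimensional Bessel process started at 0. The conjecture of Ouaki and Pitman replaces the running maximum \(M\) by the concave majorant \(K\) and BES(3) by BES(5).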
When: Monday, October 14, 2024, from 2:30 - 3:30 p.m.
Speaker: Shiwen Yang (Boston University)
Location: Virtual via Zoom
Abstract: Attractor-Based Coevolving Dot Product Random Graph Model: Detecting Polarization Behavior of Networks
We introduce the attractor-based coevolving dot product random graph model (ABCDPRGM) to analyze time-series network data that exhibit polarizing or flocking behavior. Graphs are generated from latent positions under the random dot product graph regime, and each node is assigned a group membership. As the network evolves through time, the latent position of each node changes based on its current position and two attractors, defined as the centers of the latent positions of its neighbors that share its group membership and of its neighbors with a different group membership. Parameters attached to the attractors quantify how strongly each attractor influences the trajectory of a node's latent position. We develop estimators for these parameters, demonstrate their consistency, and establish convergence rates under specific assumptions. Through the ABCDPRGM, we provide a novel framework for quantifying and understanding the underlying forces that drive polarizing or flocking behavior in dynamic network data.
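To make the evolution mechanism concrete, here is a purely illustrative sketch of one possible latent-position update of the kind described above; the function name, the specific linear form, and the parameters alpha and beta are assumptions for illustration and need not match the model's actual specification:

```python
import numpy as np

def update_latent_positions(X, A, groups, alpha, beta):
    """Illustrative update (hypothetical form, not the paper's exact rule): each
    node's latent position is pulled toward two attractors, the mean latent
    position of its same-group neighbors (weight alpha) and the mean latent
    position of its different-group neighbors (weight beta)."""
    X_new = X.copy()
    for i in range(X.shape[0]):
        nbrs = np.flatnonzero(A[i])                     # neighbors of node i
        same = nbrs[groups[nbrs] == groups[i]]          # same-group neighbors
        diff = nbrs[groups[nbrs] != groups[i]]          # different-group neighbors
        attractor_same = X[same].mean(axis=0) if same.size else X[i]
        attractor_diff = X[diff].mean(axis=0) if diff.size else X[i]
        X_new[i] = X[i] + alpha * (attractor_same - X[i]) + beta * (attractor_diff - X[i])
    return X_new
```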
When: Friday, October 4, 2024, from 2:30 - 3:30 p.m.
Speaker: Dongwei Chen (Colorado State)
Location: Virtual via Zoom
Abstract: In this talk, I will present my latest work on approximation in reproducing kernel Hilbert spaces. We generalize the least squares method to probabilistic approximation in reproducing kernel Hilbert spaces and show the existence and uniqueness of the optimizer.
Furthermore, we generalize the celebrated representer theorem in this setting, and particularly when the probability measure is finitely supported or the Hilbert space is finite-dimensional, we show that the approximation problem turns into a measure quantization problem.
Some discussions and examples are also given for cases where the space is infinite-dimensional and the measure is infinitely supported. This is joint work with Kai-Hsiang Wang from Northwestern University.
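For reference, the classical representer theorem being generalized (stated here in its standard regularized least-squares form, as background only) says that the minimizer of
\[
\min_{f \in \mathcal{H}_K} \; \sum_{i=1}^{n} \bigl(f(x_i) - y_i\bigr)^2 + \lambda \|f\|_{\mathcal{H}_K}^2, \qquad \lambda > 0,
\]
over a reproducing kernel Hilbert space \(\mathcal{H}_K\) with kernel \(K\) admits the finite-dimensional representation \(f^{*} = \sum_{i=1}^{n} \alpha_i K(\cdot, x_i)\) for some coefficients \(\alpha_i \in \mathbb{R}\).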
Previous Years
Organized by: McKenzie Black, Thomas Hamori, Chunyan Li
Note: Due to the COVID-19 pandemic, we are currently leaving the format of the seminar up to each individual speaker. To make the seminar as accessible as possible, we will host a live Zoom session for each in-person presentation so that anyone who cannot, or would prefer not to, attend in person can still participate.
- February 25th
- 1:00 pm
Abstract: Partial differential equations are often used to model various physical phenomena, such as heat diffusion, wave propagation, fluid dynamics, elasticity, electrodynamics, and so on. Due to their important applications in scientific research and engineering, many numerical methods have been developed in past decades for the efficient and accurate solution of these equations. Inspired by the rapidly growing impact of deep learning techniques, we propose a novel neural network method, “GF-Net”, for learning the Green’s functions of the classic linear reaction-diffusion equations in an unsupervised fashion. The proposed method overcomes the challenges of finding the Green’s functions of the equations on arbitrary domains by utilizing the physics-informed neural network approach and domain decomposition. In particular, it leads to a fast algorithm for solving the target equations subject to various sources and Dirichlet boundary conditions without network retraining. We also numerically demonstrate the effectiveness of the proposed method through extensive experiments on square, annular, and L-shaped domains.
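The reason a learned Green's function removes the need for retraining (a standard fact about Green's functions, stated here as background rather than as part of the paper) is that, for example with homogeneous Dirichlet boundary conditions, the solution for any source term \(f\) is recovered by a single integral,
\[
u(x) = \int_{\Omega} G(x, y)\, f(y)\, \mathrm{d}y,
\]
so once \(G\) is available, changing the source only changes a quadrature rather than the network; nonhomogeneous boundary data contribute an additional boundary integral involving \(G\).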
Chunyan Li, University of South Carolina
- February 25th
- 1:00 pm
Abstract: In this talk, we will introduce a nonlinear dimensionality reduction method based on neural networks, the variational autoencoder (VAE). Two parameterized conditional distributions, the encoder and the decoder, are learned by minimizing the so-called variational lower bound objective. We will go through the derivation and the reparameterization trick used in this process. Applications will be shown at the end.
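For orientation, the variational lower bound (ELBO) and the reparameterization trick mentioned above take the following standard form (background only, with the usual Gaussian encoder assumption):
\[
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\bigl[\log p_\theta(x \mid z)\bigr] - \mathrm{KL}\bigl(q_\phi(z \mid x) \,\|\, p(z)\bigr),
\]
\[
z = \mu_\phi(x) + \sigma_\phi(x) \odot \epsilon, \qquad \epsilon \sim \mathcal{N}(0, I),
\]
where the reparameterization moves the randomness into \(\epsilon\) so that gradients can flow through \(\mu_\phi\) and \(\sigma_\phi\).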
Zongyi Li, California Institute of Technology
- February 11th
- 1:00 pm
Abstract: The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces or finite sets. We propose a generalization of neural networks tailored to learn operators mapping between infinite-dimensional function spaces. We formulate the approximation of operators by composition of a class of linear integral operators and nonlinear activation functions, so that the composed operator can approximate complex nonlinear operators. We prove a universal approximation theorem for our construction. Furthermore, we introduce four classes of operator parameterizations: graph-based operators, low-rank operators, multipole graph-based operators, and Fourier operators, and describe efficient algorithms for computing with each one. The proposed neural operators are resolution-invariant: they share the same network parameters between different discretizations of the underlying function spaces and can be used for zero-shot super-resolution. Numerically, the proposed models show superior performance compared to existing machine-learning-based methodologies on Burgers' equation, Darcy flow, and the Navier-Stokes equations, while being several orders of magnitude faster than conventional PDE solvers.
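As a rough illustration of the Fourier class of operator parameterizations (a much-simplified single-channel 1-D sketch, assuming NumPy; the actual architecture uses per-mode channel-mixing weights, a pointwise linear term, and a nonlinearity):

```python
import numpy as np

def fourier_layer(v, R, k_max):
    """Simplified Fourier operator layer acting on samples of a 1-D function.

    v: (n,) real samples on a uniform grid, R: (k_max,) learned complex weights.
    Transform to Fourier space, keep the lowest k_max modes, multiply each
    retained mode by its learned weight, and transform back.
    """
    v_hat = np.fft.rfft(v)                        # forward FFT
    out_hat = np.zeros_like(v_hat)
    out_hat[:k_max] = R * v_hat[:k_max]           # learned action on low modes
    return np.fft.irfft(out_hat, n=v.shape[0])    # return to physical space
```

Because the learned weights act on Fourier modes rather than grid points, the same parameters can be applied to any discretization of the input function, which is the source of the resolution-invariance mentioned above.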
Yunkai Teng, University of South Carolina
- October 29th
- 12:00 pm
Abstract: Level Set Learning and Function Approximations on Sparse Data through Pseudo-reversible Neural Network
Chunyan Li, University of South Carolina
- October 15th
- 12:00 pm
Abstract: PCA, one of the most popular dimensionality reduction methods, is an orthogonal linear transformation that maps the data to a new coordinate system. In this talk, we will learn how to derive this new basis and characterize the structure of all the principal components via the SVD of the data covariance matrix. Variants of PCA, namely dual PCA and kernel PCA, are mentioned as well.
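A minimal sketch of this derivation in code (assuming NumPy; the principal directions are the right singular vectors of the centered data matrix, equivalently the eigenvectors of the covariance matrix):

```python
import numpy as np

def pca(X, k):
    """Top-k principal components of X (n samples x d features) via the SVD."""
    Xc = X - X.mean(axis=0)                          # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                              # principal directions (rows)
    scores = Xc @ components.T                       # coordinates in the new basis
    explained_var = S[:k] ** 2 / (X.shape[0] - 1)    # variance along each direction
    return components, scores, explained_var
```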
McKenzie Black, University of South Carolina
- October 1st
- 12:00 pm
Abstract: In this talk, we will introduce the pressureless Euler alignment system and then modify the system with a nonlinear velocity. We explore the local well-posedness of the system while discussing various methods for obtaining it. Focusing on the nonlinear velocity, we introduce a similar system to determine how the magnitude of the nonlinearity affects unconditional flocking and the behaviors that follow.
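For reference, the pressureless Euler alignment system in one dimension (standard form, with communication kernel \(\phi\); given as background, the nonlinear-velocity modification discussed in the talk is not reproduced here) reads
\[
\partial_t \rho + \partial_x(\rho u) = 0, \qquad
\partial_t u + u\, \partial_x u = \int \phi(x - y)\,\bigl(u(y, t) - u(x, t)\bigr)\, \rho(y, t)\, \mathrm{d}y.
\]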
Thomas Hamori, University of South Carolina
- September 24th
- 12:00 pm
Abstract: Conservation laws are foundational in fluid dynamics. I will derive the conservation laws for macroscopic traffic flow models from conservation of mass. A brief discussion of the classical theory for macroscopic traffic flow will follow, and I will present joint work with my advisor, Dr. Changhui Tan, on a class of nonlocal traffic models. In these models, the nonlocality is used to combat the nonlinearity of the PDE. I will show that the nonlocality broadens the class of initial conditions that admit global smooth solutions for these models.
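As background for the conservation-law derivation (the classical local model, not the nonlocal models of the joint work): conservation of mass for a vehicle density \(\rho(x, t)\) with velocity \(u\) gives the Lighthill-Whitham-Richards equation
\[
\partial_t \rho + \partial_x\bigl(\rho\, u(\rho)\bigr) = 0, \qquad \text{e.g. } u(\rho) = u_{\max}\Bigl(1 - \frac{\rho}{\rho_{\max}}\Bigr),
\]
and the nonlocal models replace the local dependence \(u(\rho)\) by a dependence on an averaged density, typically a convolution of \(\rho\) with a kernel.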