## Applied Math Seminar, Spring 2017

Organized by Shan Zhao

## Time: 3:30 - 4:30 pm, Fridays

Location: 228 Gordon Palmer Hall, Department of Mathematics, University of Alabama

**January 20, 2017**

**Hristo Sendov**

Department of Statistical and Actuarial Sciences, University of Western Ontario

**Title:** Stronger Rolle's Theorem for Complex Polynomials

**Abstract**

**January 27, 2017**

**Shan Zhao**

Department of Mathematics, University of Alabama

**Title:** On developing stable finite element methods for pseudo-time simulation of biomolecular electrostatics

**Abstract:** The Poisson-Boltzmann equation (PBE) is a widely used implicit solvent model for the electrostatic analysis of solvated biomolecules. To address the exponential nonlinearity of the PBE, a pseudo-time approach has been developed in the literature, which completely suppresses the nonlinear instability through an analytic integration in a time splitting framework. This work aims to develop novel finite element methods (FEMs) in this pseudo-time framework for solving the PBE. Two treatments of the singular charge sources are investigated: one directly applies the definition of the delta function in the variational formulation, while the other avoids numerical approximation of the delta function by using a regularization formulation. To apply the proposed FEMs for both the PBE and the regularized PBE in real protein systems, a new tetrahedral mesh generator based on the minimal molecular surface definition is developed. Numerical experiments on several benchmark examples and free energy calculations of protein systems are conducted to demonstrate the stability, accuracy, and robustness of the proposed PBE solvers. This is joint work with Weishan Deng and Jin Xu (Institute of Software, CAS, China).
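The analytic-integration idea behind the pseudo-time framework can be illustrated on a 1D toy analogue of the nonlinear PBE. The sketch below is a simplification under stated assumptions, not the authors' FEM solver: the model equation u_t = u_xx - sinh(u), the grid, and the explicit diffusion step are all illustrative choices. The key point it shows is that the stiff sinh nonlinearity can be integrated exactly in closed form, so the splitting stays stable even for large u.

```python
import numpy as np

def pseudo_time_pb(u0, dx, dt, steps):
    """Pseudo-time integration of u_t = u_xx - sinh(u) on a 1D grid
    (homogeneous Dirichlet ends), using Lie splitting: the stiff
    nonlinear part du/dt = -sinh(u) is integrated analytically."""
    u = u0.copy()
    r = dt / dx**2
    for _ in range(steps):
        # analytic substep for u' = -sinh(u): tanh(u/2) decays like e^{-t},
        # so u(t) = 2 * arctanh(tanh(u0/2) * exp(-t)) exactly
        u = 2.0 * np.arctanh(np.tanh(0.5 * u) * np.exp(-dt))
        # explicit diffusion substep (stable here since r <= 1/2)
        u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

x = np.linspace(0, 1, 101)
u0 = 10 * np.sin(np.pi * x)     # large amplitude: sinh(10) is about 1.1e4
u = pseudo_time_pb(u0, dx=x[1] - x[0], dt=4e-5, steps=500)
```

With a naive explicit treatment, sinh(u) of order 1e4 forces severe time-step restrictions; the analytic substep sidesteps this, which is the essence of the suppression of nonlinear instability described above.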

**February 3, 2017**

**Roger Sidje**

Department of Mathematics, University of Alabama

**Title:** Inexact chemical master equation via tensor reduction methods

**Abstract:** In a biological cell, solving the chemical master equation (CME) that models biochemical reactions with multiple species and large copy numbers of these species is a significant challenge due to the enormous size of the state space. This so-called curse of dimensionality has been a barrier for traditional approaches aimed at directly solving the CME. However, recent research based on tensor reduction methods is showing promise owing to its ability to cope much better with higher dimensional problems. We build on insights from our framework of inexact Krylov methods and report early results on some problems that had been challenging for traditional approaches.

**March 10, 2017**

**Zhihan Wei**

Department of Mathematics, University of Alabama

**Title:** A spatially second order alternating direction implicit (ADI) method for solving three dimensional parabolic interface problems

**Abstract:** A new matched alternating direction implicit (ADI) method is proposed in this presentation for solving three-dimensional (3D) parabolic interface problems with discontinuous jumps and complex interfaces. In time discretization, this method is found to be first order and unconditionally stable in numerical experiments. In space discretization, it achieves second order accuracy based on simple Cartesian grids for various irregularly-shaped surfaces and spatial-temporal dependent jumps. Computationally, the matched ADI method is as efficient as the fastest implicit scheme based on geometric multigrid for solving 3D parabolic equations, in the sense that its complexity in each time step scales linearly with respect to the number of spatial degrees of freedom N, i.e., O(N). Furthermore, unlike iterative methods, the ADI method is an exact, non-iterative algebraic solver which is guaranteed to terminate after a fixed number of computations for a given N. Therefore, the proposed matched ADI method provides a very promising tool for solving 3D parabolic interface problems.

**March 24, 2017**

**Xiaowen Wang**

Department of Aerospace Engineering and Mechanics, University of Alabama

**Title:** High-Order Shock-Fitting Solvers and Numerical Simulations of Hypersonic Flows

**Abstract:** High-order numerical simulations of hypersonic/non-equilibrium flows have received considerable attention in the research community over the past two decades because of their potential for delivering higher accuracy. In this talk, I will first summarize my achievements in developing high-order shock-fitting solvers for both perfect gas flows and high-enthalpy non-equilibrium flows. These solvers have been successfully applied to hypersonic flow simulations including boundary-layer stability and transition control, and shock-turbulence interaction. As an example, I will talk about the patent for a new control strategy for laminar hypersonic flow using discrete and/or continuous surface roughness. Finally, I will go through future directions of high-order numerical simulations related to space and astronautics applications such as thermal protection systems, pulse detonation engines, and the environmental challenges inherent in future high-speed vehicles.

**Biosketch:** Dr. Xiaowen Wang is an assistant professor in the Department of Aerospace Engineering and Mechanics. Before joining UA, he was a Senior Scientist at the Ohio Aerospace Institute, working as a contractor at Wright-Patterson Air Force Base. He holds a Ph.D. in Mechanical Engineering from the University of California at Los Angeles, an M.S. in Fluid Mechanics, and a B.S. in Theoretical and Applied Mechanics from the University of Science and Technology of China. Dr. Wang is an Associate Fellow of the American Institute of Aeronautics and Astronautics (AIAA) and a member of the AIAA Thermophysics Technical Committee. His main research interests are high-order numerical methods, hypersonic boundary-layer stability and transition control, and thermochemical non-equilibrium flows.

**March 31, 2017**

**Yuyuan "Lance" Ouyang**

Department of Mathematical Sciences, Clemson University

**Title:** First-order methods for structured convex optimization

**Abstract:** First-order optimization methods have drawn increasing attention in recent years, due to their efficiency in solving large scale optimization problems that arise from big data models. Many big data analysis problems can be formulated as composite convex optimization, in which the model to be solved consists of a loss function that describes data fidelity, and a regularization term that enforces structural properties of the computed solution (e.g., total variation, low rank tensor, overlapped group lasso, graph regularization, etc.). In order to design efficient first-order methods, it is important to utilize the structure of both the fidelity and regularization terms in the objective function. In this talk, I will present some first-order methods that exploit the structures of large optimization problems. Examples of useful special structures include smoothness of loss functions, saddle-point reformulations of regularization terms, and variational inequality reformulations of the objective function.

**April 7, 2017**

**Gary C. Cheng**

Department of Aerospace Engineering and Mechanics, University of Alabama

**Title:** Space-Time CESE Method -- Current Status and Future

**Abstract:** With the advance of computer hardware and numerical methodologies, computational fluid dynamics (CFD) has become a popular engineering tool for analyzing a wide range of fluid flow problems. While routine computations are being performed regularly for qualitative assessment and gross parametric studies, the trend is moving towards higher-fidelity computations for problems involving strong transient effects and relatively complex geometries as well as physics, such as aeroacoustics, shock-boundary layer interaction, fluid-structure interaction, electromagnetic waves, etc. Unfortunately, established CFD methods treat the flow dependency on space and time separately, and require that higher-order discretization schemes be employed in the temporal and spatial directions to capture transient flow phenomena. The use of higher-order schemes generally leads to more numerical diffusion in simulating complex flow structures such as the interaction between acoustic waves and shocks. In addition, for most established numerical schemes, evaluation of the convective flux is constructed using a characteristics-based approximate solution to the Riemann problem, which is fundamentally one dimensional; its extension into multiple dimensions has not been mathematically proved. The space-time conservation-element solution-element (CESE) method, developed by Dr. S.-C. Chang of NASA GRC, implements a coupled treatment of the flow dependency on space and time, which is considered to be physically correct and necessary for modeling transient flows. This numerical method is genuinely multi-dimensional and second-order accurate in both space and time, and is capable of offering remarkable accuracy in resolving flow discontinuities and unsteady waves simultaneously.

In this seminar, the numerical approach of the CESE method will be introduced, results obtained from some applications (both inviscid and viscous flows) of this method will be presented, and its strengths, weaknesses, and future development will be discussed.

**April 14, 2017**

**Brendan Ames**

Department of Mathematics, University of Alabama

**Title:** How to reliably find a hidden clique

**Abstract:** The clique problem is a classical problem in combinatorics: given a graph G and a positive integer k, determine if G contains a complete subgraph on k vertices. Although this problem is known to be NP-hard, we'll discover that it is solvable in polynomial time for a special class of problem instances. In particular, we can recover a sufficiently large clique hidden by noise from the solution of a particular convex relaxation. Finally, we'll show that this phase transition to perfect recovery of the hidden clique depends on the amount of noise present in the graph, and establish significantly improved recovery guarantees in the presence of sparse noise.
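As a toy illustration of the planted-clique model (not the convex relaxation from the talk), the sketch below plants a clique in an Erdős-Rényi graph and recovers it with a simple degree heuristic. This heuristic only works when k is well above the sqrt(n log n) scale; the convex methods discussed in the talk succeed for smaller hidden cliques. All parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def planted_clique(n, k):
    """G(n, 1/2) with a clique planted on k random vertices."""
    A = rng.random((n, n)) < 0.5
    A = np.triu(A, 1)
    A = A | A.T                      # symmetric adjacency, zero diagonal
    clique = rng.choice(n, size=k, replace=False)
    A[np.ix_(clique, clique)] = True # wire the planted clique completely
    np.fill_diagonal(A, False)
    return A, set(clique)

n, k = 300, 120                      # k far above the sqrt(n log n) threshold
A, clique = planted_clique(n, k)
degrees = A.sum(axis=0)
recovered = set(np.argsort(degrees)[-k:])   # k highest-degree vertices
overlap = len(recovered & clique) / k
```

Clique vertices have expected degree (k - 1) + (n - k)/2 versus (n - 1)/2 for the rest, so for k this large the degree gap dwarfs the noise and the heuristic nearly recovers the planted set.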

**May 26, 2017** (11 am at 346 GP)

**Shibin Dai**

Department of Mathematical Sciences, New Mexico State University

**Title:** Phase-Field Free Energy and Boundary Force for Molecular Solvation

**Abstract:** We discuss a phase-field variational model for the solvation of charged molecules with implicit solvent. The solvation free-energy functional of all phase fields consists of the surface energy, solute excluded volume and solute-solvent van der Waals dispersion energy, and electrostatic free energy. The last part is defined through the electrostatic potential governed by the Poisson-Boltzmann equation, in which the dielectric coefficient is defined through a phase field. We prove Gamma-convergence of the phase-field free-energy functional to its sharp-interface limit. We also define the dielectric boundary force for any phase field as the negative first variation of the free-energy functional, and prove the convergence of such forces to the corresponding sharp-interface limit.
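For readers unfamiliar with phase-field surface energies, the diffuse-interface surface energy in models of this type typically takes the van der Waals-Cahn-Hilliard (Modica-Mortola) form below. This is a generic example for orientation, not necessarily the exact functional of the talk:

```latex
E_\varepsilon[\phi] \;=\; \int_\Omega \Big( \frac{\varepsilon}{2}\,|\nabla\phi|^2
  \;+\; \frac{1}{\varepsilon}\,W(\phi) \Big)\,dx,
\qquad W(\phi) \;=\; 18\,\phi^2(1-\phi)^2 .
```

As \(\varepsilon \to 0\), \(E_\varepsilon\) Gamma-converges to \(\gamma\,\mathrm{Per}(\Sigma)\), the perimeter of the sharp interface, with surface-tension constant \(\gamma = \int_0^1 \sqrt{2W(s)}\,ds\); for this normalization of \(W\), \(\sqrt{2W(s)} = 6s(1-s)\) and \(\gamma = 1\).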

## Applied Math Seminar, Fall 2016

Organized by Shan Zhao

## Time: 10 - 11 am, Fridays

Location: 230 Gordon Palmer Hall, Department of Mathematics, University of Alabama

**September 2, 2016 (Colloquium of Mathematics Department)**

Heping Zhang

Susan Dwight Bliss Professor of Biostatistics, School of Public Health, Yale University

**Title:** Statistical Strategies in Analyzing Data with Unequal Prior Knowledge

**Abstract:** The advent of technologies including high throughput genotyping and computer information technologies has produced ever larger and more diverse databases that are potentially information rich. This creates the need to develop statistical strategies that have a sound mathematical foundation and are computationally feasible and reliable. In statistics, we commonly deal with relationships between variables using correlation and regression models. With diverse databases, the quality of the variables may vary and we may know more about some variables than others. I will present some ideas on how to conduct statistical inference with unequal prior knowledge. Specifically, how do we define correlation between two sets of random variables conditional on a third set of random variables, and how do we select predictors when we have information from sources other than the databases with raw data? I will address some mathematical and computational challenges in order to answer these questions. Analysis of real genomic data will be presented to support the proposed methods and highlight remaining challenges.

**September 9, 2016**

Yangyang Xu

Department of Mathematics, University of Alabama

**Title:** ARock: asynchronous parallel coordinate update

**Abstract:** The problem of finding a fixed point of a nonexpansive operator is an abstraction of many models in numerical linear algebra, optimization, and other areas of scientific computing. To solve this problem, we propose ARock, an asynchronous parallel algorithmic framework, in which a set of agents (machines, processors, or cores) update randomly selected coordinates of the unknown variable in an asynchronous parallel fashion. The resulting algorithms are not affected by load imbalance. When the coordinate updates are atomic, the algorithms are free of memory locks. We show that if the nonexpansive operator has a fixed point, then with probability one, the sequence of points generated by ARock converges to a fixed point of the operator. Stronger convergence properties such as linear convergence are obtained under stronger conditions. As special cases of ARock, novel algorithms for linear systems, convex optimization, machine learning, and distributed and decentralized optimization are introduced with provable convergence. Very promising numerical performance of ARock has been observed. We present numerical results from solving sparse logistic regression problems.
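The coordinate-update rule at the heart of such frameworks can be simulated sequentially. The sketch below is an illustration under stated assumptions, not the ARock implementation: there is no real parallelism or asynchrony, and the contractive affine operator, step size, and iteration count are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# A contractive affine operator T(x) = M x + c with ||M||_inf < 1, whose
# unique fixed point solves (I - M) x = c.  Contraction in the max norm
# guarantees convergence of coordinate updates in any order.
n = 50
M = rng.standard_normal((n, n))
M *= 0.9 / np.linalg.norm(M, np.inf)     # scale so T is a contraction
c = rng.standard_normal(n)
x_star = np.linalg.solve(np.eye(n) - M, c)

# Sequential simulation of the randomized coordinate update
# x_i <- x_i + eta * (T(x)_i - x_i): each "agent" refreshes one
# randomly chosen coordinate at a time.
x, eta = np.zeros(n), 1.0
for _ in range(100_000):
    i = rng.integers(n)
    x[i] += eta * (M[i] @ x + c[i] - x[i])

err = np.linalg.norm(x - x_star)
```

Each update touches a single coordinate, which is what makes the lock-free, load-balance-insensitive parallel execution described in the abstract possible.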

**September 16, 2016**

Zhe Jiang

Department of Computer Science, University of Alabama

**Title:** Spatial Big Data Analytics: Classification Techniques for Earth Observation Imagery

**Abstract:** Spatial Big Data (SBD), e.g., earth observation imagery, GPS trajectories, temporally detailed road networks, etc., refers to geo-referenced data whose volume, velocity, and variety exceed the capability of current spatial computing platforms. SBD has the potential to transform our society. Vehicle GPS trajectories together with engine measurement data provide a new way to recommend environmentally friendly routes. Satellite and airborne earth observation imagery plays a crucial role in hurricane tracking, crop yield prediction, and global water management. The potential value of earth observation data is so significant that the White House recently declared that full utilization of this data is one of the nation's highest priorities. However, SBD poses significant challenges to current big data analytics. In addition to its huge dataset size (NASA collects petabytes of earth images every year), SBD exhibits four unique properties related to the nature of spatial data that must be accounted for in any data analysis. First, SBD exhibits spatial autocorrelation effects; in other words, we cannot assume that nearby samples are statistically independent. Current analytics techniques that ignore spatial autocorrelation often perform poorly, exhibiting low prediction accuracy and salt-and-pepper noise (i.e., pixels mistakenly predicted as different from their neighbors). Second, spatial interactions are not isotropic and vary across directions. Third, spatial dependency exists at multiple spatial scales. Finally, spatial big data exhibits heterogeneity, i.e., samples with identical feature values may belong to different class labels in different regions. Thus, predictive models learned globally may perform poorly in many local regions.

This talk investigates novel SBD analytic techniques to address some of these challenges. To address the challenges of spatial autocorrelation and anisotropy, we introduce novel spatial classification models such as spatial decision trees for raster SBD (e.g., earth observation imagery). To scale up the proposed models, efficient learning algorithms via computational pruning are developed. The proposed techniques have been applied to real-world remote sensing imagery for wetland mapping. We will also introduce a spatial ensemble learning framework to address the challenge of spatial heterogeneity, particularly the class ambiguity issue in geographical classification, i.e., samples with the same feature values belonging to different classes in different spatial zones. Evaluations on three real-world remote sensing datasets confirmed that the proposed spatial ensemble learning outperforms current approaches such as bagging, boosting, and mixture of experts when class ambiguity exists. The talk will conclude with future research directions.

**Biography:** Dr. Zhe Jiang is currently an assistant professor in computer science at the University of Alabama, Tuscaloosa. He received his Ph.D. in computer science from the University of Minnesota, Twin Cities, in 2016. Prior to that, Zhe received his B.E. degree in electrical engineering from the University of Science and Technology of China in 2010. His research interests include spatial big data analytics, spatial and spatiotemporal data mining, spatial databases, geographic information systems, as well as their interdisciplinary applications in climate science, natural resource management, environmental science, transportation, public safety, public health, etc.

**September 23, 2016**

Stavros Belbas

Department of Mathematics, University of Alabama

**Title:** Modeling and control via spatiotemporal functional series

**Abstract:** The well-known Wiener-Volterra functional series has been applied to problems involving temporal signals (the independent variable is time). Here, we present a more general model that also involves spatial independent variables and various forms of discontinuities and singularities. A relevant variational calculus is also developed.

**September 30, 2016**

Mengpu Chen

Department of Mathematics, University of Alabama

**Title:** Augmented Lagrangian method for Euler's elastica based variational models

**Abstract:** Euler's elastica is widely applied in digital image processing. It is very challenging to minimize the Euler's elastica energy functional due to the high-order derivative of the curvature term, and the computational cost is high when using traditional time-marching methods; hence the development of fast methods is necessary. In the literature, the augmented Lagrangian method (ALM) has been used by Tai, Hahn, and Chung to solve the minimization problem of the Euler's elastica functional, and it is proven to be more efficient than the gradient descent method. However, several auxiliary variables are introduced as relaxations, which means more penalty parameters must be handled and considerable effort is needed to choose optimal parameters. In this work, we employ a novel technique by Bae, Tai, and Zhu, which treats curvature dependent functionals using ALM with fewer Lagrange multipliers, and apply it to a wide range of imaging tasks, including image denoising, image inpainting, image zooming, and image deblurring. Numerical experiments demonstrate the efficiency of the proposed algorithm. Besides this, numerical experiments also show that our algorithm gives better results with higher SNR/PSNR, and makes the choice of optimal parameters more convenient.
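The augmented Lagrangian method itself is easy to demonstrate on a much simpler problem than the elastica functional. The sketch below applies ALM to a toy equality-constrained quadratic program chosen so each x-subproblem has a closed form; the problem, penalty parameter, and iteration count are illustrative assumptions, and the elastica subproblems in the talk are far more involved.

```python
import numpy as np

rng = np.random.default_rng(2)

# Augmented Lagrangian method (ALM) on:
#   minimize (1/2)||x||^2  subject to  A x = b.
m, n, rho = 5, 20, 10.0
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam = np.zeros(m)                      # Lagrange multiplier estimate
x = np.zeros(n)
H = np.eye(n) + rho * A.T @ A          # Hessian of the augmented Lagrangian in x
for _ in range(100):
    # x-step: minimize (1/2)||x||^2 + lam.(Ax-b) + (rho/2)||Ax-b||^2
    x = np.linalg.solve(H, A.T @ (rho * b - lam))
    # multiplier ascent step on the constraint residual
    lam += rho * (A @ x - b)
residual = np.linalg.norm(A @ x - b)
```

The same two-step pattern, a primal minimization followed by a multiplier update, is what the elastica solvers iterate, with the auxiliary-variable splittings mentioned above replacing the closed-form x-step.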

**October 7, 2016**

Christopher Wanstall

Department of Mathematics, University of Alabama

**Title:** A step function density profile model for the convective stability of CO2 geological sequestration

**Abstract:** The convective stability associated with carbon sequestration is usually investigated by adopting an unsteady, diffusive basic profile to account for the space and time development of the carbon-saturated boundary layer's instabilities. Due to the time dependence of the nonlinear base profile, the instability threshold conditions are expressed as critical times at which the boundary layer instability sets in. This work adopts an unstably stratified basic profile having a step function density, with a top-heavy carbon-saturated layer overlying a lighter carbon-free layer. The resulting configuration resembles that of the Rayleigh-Taylor problem, with the exceptions that there is no free interface separating the two layers and that buoyancy diffusion takes place between the layers. We consider a model that takes into account the anisotropy in both permeability and carbon dioxide diffusion, and chemical reactions between the CO2-rich brine and the host mineralogy. We carry out a linear stability analysis to derive the instability threshold parameters for two sets of boundary conditions. First, we consider an upper boundary that is perfectly permeable and a lower boundary that is impervious to mass flow. Second, we consider an upper boundary that is nearly impermeable and a lower boundary that is impervious to mass flow. In each case, the base state consists of a heavy carbon-rich brine layer overlying a lighter carbon-free layer separated by a horizontal non-free interface. We solve for the minimum thickness of the carbon-rich layer at which convection sets in and quantify how its value is influenced by diffusion anisotropy, permeability, reaction, and the type of boundary conditions. The step function density profile is found to yield convective iso-concentration contours that are non-smooth and have a tongue-like shape. We quantify the non-smoothness property by deriving expressions for the CO2 flux at the interface. The linear problem corresponding to the second set of boundary conditions is extended to the nonlinear regime, the analysis of which leads to the determination of a uniformly valid supercritical steady solution.

**October 14, 2016**

Burcu B. Keskin

Information Systems, Statistics, and Management Science, University of Alabama

**Title:** Sourcing Strategies Under Supply and Demand Uncertainty

**Abstract:** Supply chain risk management receives increasing attention due to the complexities arising from shorter product life cycles, higher customer expectations, and increasing dependencies among supply chain entities. We study the optimal use of downward substitution and multi-sourcing for a capacitated firm subject to supply and demand risk. The firm sells two products that differ in the quality of one component, i.e., high and low quality variants of the product. The firm procures the higher quality component from a single, perfectly reliable supplier. For the lower quality component, two suppliers are available: one is expensive but perfectly reliable, while the other is cheaper but unreliable. Specifically, with a positive probability, the cheaper supplier fails to deliver an order, which we refer to as a disruption. In addition to dual sourcing, the firm may also substitute units of the higher quality component in place of the lower quality component when needed, i.e., downward substitution. The downward substitution capability also permits the firm to hold some amount of the higher quality component as hedge inventory, used only in the event of a disruption of the lower quality component's supply.

In this work, we characterize the optimal role of downward substitution and dual sourcing in mitigating supply and demand risk via an exact analysis for a limited capacity setting. Specifically, we develop a mathematical model for a two-product setting and analyze the first-order conditions to identify situations where various combinations of the sourcing strategies are optimal. Interestingly, when capacity is limited, we show that i) an optimal strategy may employ downward substitution, even when no disruption occurs; and ii) contrary to known results from the literature, an optimal sourcing strategy may sole-source from the reliable and more expensive supplier without an order from the unreliable but cheaper supplier. The former result shows that decision makers may use downward substitution not only as a reactive tactic but also as a means to achieve higher utilization in a capacitated environment with supply uncertainty.

This is a joint work with Nick Freeman (Univ. Houston), Sharif Melouk (UA) and John Mittenthal (UA).

**October 21, 2016**

Sagy Cohen

Department of Geography, University of Alabama

**Title:** Research at the Surface Dynamics Modeling Lab: from global-scale and long-term riverine modeling to local-scale remote sensing and simulations of flood events

**Abstract**

**November 4, 2016**

Xin Luo

Department of Mathematics, University of Alabama

**Title:** Development of Modal Interval Algorithm for Solving Continuous Minimax Problems

**Abstract:** While a large variety of effective methods have been developed for solving more traditional minimization problems, much less success has been reported in solving the minimax problem. Continuous minimax problems arise in engineering, finance, and other fields. Sainz (2008) proposed a modal interval algorithm based on his semantic extensions to solve continuous minimax problems. We developed an improved algorithm using modal intervals to solve unconstrained continuous minimax problems. A new interval method is introduced by taking advantage of both the original minimax problem and its dual problem (called the maximin problem). The new algorithm is implemented in a framework of uniform partition of the search domain. Various improvement techniques, including more bisecting choices, sampling methods, and deletion conditions, are applied to make the new method more powerful. Preliminary numerical results provide promising evidence of its effectiveness.

**November 11, 2016**

Bo Zhang

Department of Geological Sciences, University of Alabama

**Title:** Automatic Chronostratigraphy from 3D Seismic Image

**Abstract:** Chronostratigraphic analysis is one of the important routines in oil and gas exploration. Horizon interpretation is key to successful structural model building and chronostratigraphic interpretation. Usually, horizon interpretation is performed by a human identifying the reflection events on a 3D seismic image, which is a time-consuming task. In this presentation, we propose methods to construct seismic horizons aligned with reflectors in a 3D seismic image. We first determine the dips of reflectors through the structure tensor of the 3D seismic image. We then use multiple sets of control points to generate more accurate horizon volumes from the 3D seismic image. The constraints are implemented through preconditioners in the conjugate gradient algorithm.
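A generic preconditioned conjugate gradient (PCG) sketch gives a sense of how preconditioners enter the solver mentioned above. This is an illustration only: it uses a simple Jacobi (diagonal) preconditioner on a random symmetric positive definite system, not the horizon-volume operator or the control-point preconditioners from the talk.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradient for SPD A; M_inv is a function
    that applies the preconditioner to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x                    # residual
    z = M_inv(r)                     # preconditioned residual
    p = z.copy()                     # search direction
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p    # conjugate direction update
        rz = rz_new
    return x

rng = np.random.default_rng(3)
n = 100
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)          # SPD test matrix
b = rng.standard_normal(n)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)       # Jacobi preconditioner: M = diag(A)
```

Swapping `M_inv` for a different operator is how constraint information can be folded into the iteration without changing the CG loop itself.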

**November 18, 2016**

Jun Ma

Department of Economics, Finance and Legal Studies, University of Alabama

**Title:** The Impact of EMU on Bond Yield Convergence: Evidence from a Time-Varying Dynamic Factor Model

**Abstract:** This paper examines the role of the EMU in explaining observed changes in sovereign bond yields. Using monthly data on long-term government bond yields for the period 1993:01 through 2015:05 for 21 OECD countries, we decompose the bond yield changes into a global factor, two regional factors (EMU and non-EMU countries), and idiosyncratic country-specific factors, and estimate a dynamic factor model with time-varying parameters and stochastic volatility a la Del Negro and Otrok (2008). We find that before the financial crisis the global factor dominated other factors in terms of explaining the observed bond yield changes for most countries in our sample. In the post-financial-crisis period there is substantial heterogeneity in the relative importance of the EMU and the idiosyncratic factors across different countries. The EMU factor consistently played a dominant role in explaining bond yield changes in Italy and Spain, whereas its dominance was intermittent in the case of Greece. For Greece, in the post-financial-crisis period the country-specific factor emerged as the dominant factor driving the bond yield dynamics. We also find that the EMU share in bond yield changes in Ireland and Portugal has increased significantly since 2012.

## Applied Math Seminar, Spring 2016

Organized by Shan Zhao

## Time: 3:30 - 4:30 pm, Fridays

Location: 228 Gordon Palmer Hall, Department of Mathematics, University of Alabama

**January 29, 2016**

David Cruz-Uribe

Department of Mathematics, University of Alabama

**Title:** Variable Lebesgue spaces: theory and applications

**Abstract**

**February 5, 2016**

Weihua Su

Department of Aerospace Engineering and Mechanics, University of Alabama

**Title:** Low-Order Computational Modeling for Nonlinear Aeroelasticity of Highly Flexible Aircraft

**Abstract:** High-altitude, long-endurance (HALE) aircraft may be used for various missions, including environmental sensing, telecom relay, and military reconnaissance. These aircraft feature long and slender wings, which may undergo large deformations under normal operating loads, exhibiting geometrically nonlinear behavior. Because of this inherently high flexibility, traditional linear theories do not provide accurate estimates of HALE aircraft aeroelastic characteristics. A methodology that can effectively model and analyze the nonlinear aeroelasticity of these very flexible aircraft will be addressed in this talk. The new framework integrates a strain-based geometrically nonlinear beam model, a finite-state unsteady subsonic aerodynamic model, and a 6-DoF rigid-body flight dynamic formulation, which allows for the coupled nonlinear aeroelastic and flight dynamic analysis of highly flexible aircraft in free flight. With this framework, the coupled effects between the large deflection due to vehicle flexibility and the flight dynamics, as well as other aeroelastic effects (e.g., flutter instability and gust response), can be properly accounted for. Some unique nonlinear aeroelastic characteristics of very flexible aircraft will be illustrated in this talk.

**February 12, 2016**

Degui Zhi

Department of Biostatistics, University of Alabama at Birmingham

**Title:** Statistics for genomic big data: genotype calling and haplotype phasing for next-generation sequencing

**Abstract:** As we enter the "$1000 genome" era, DNA sequencing generates enormous amounts of complex genomic data. A central goal for modern genomic data science is to reconstruct the actual genomic sequences of an individual out of a large number of error-containing DNA fragments. It turns out that, because all humans are related, the best approach is to do the reconstruction jointly from a number of individuals using population genetics models. In this talk, we will start with a historical perspective on the main methods and algorithms for this problem, and discuss some latest results from our group.

**February 19, 2016**

Toyin Alli

Department of Mathematics, University of Alabama

**Title:** Statistical Networks with Applications in Economics and Finance

**Abstract:** Due to the vast amount of economic and financial information to be stored and analyzed, the need for the study of high dimensional networks has increased dramatically. Typical approaches for determining the groupwise information to infer statistical networks include lasso and large covariance matrix estimation regularizations. In this talk, I will investigate the application of the nodewise lasso algorithm to U.S. economic and financial data over the past 50 years. I used the nodewise lasso to estimate statistical networks of varying sparsity levels to describe the conditional dependence structure of a dataset consisting of 131 U.S. macroeconomic time series. With these estimated networks, I describe how they can be selected and interpreted in the context of both the statistical literature and existing economic theory to enlarge our knowledge of economic and financial network structure.

**February 26, 2016 (Colloquium of Mathematics Department)**

Emil Alexov

Computational Biophysics and Bioinformatics, Department of Physics, Clemson University**Title:**Multi-scale modeling of kinesin motion along microtubule utilizing DelPhi Poisson-Boltzmann solver**Abstract:**Electrostatics plays a major role in molecular biology because practically all atoms carry partial charge while being situated at Angstrom-scale distances from one another. Many biological phenomena involve the binding of proteins to a large object. Because the electrostatic forces that guide binding act over large distances, truncating the size of the system to facilitate computational modeling frequently yields inaccurate results. Here we report a multiscale approach that implements a computational focusing method that permits computation of large systems without truncating the electrostatic potential and achieves the high resolution required for modeling macromolecular interactions, all while keeping the computational time reasonable. We tested our approach on the motility of various kinesin motor domains. We found that electrostatics helps guide kinesins as they walk: N-kinesins towards the plus-end, and C-kinesins towards the minus-end of microtubules. Our methodology enables computation in similarly large systems, including protein binding to DNA, viruses, and membranes. Lab webpage: http://compbio.clemson.edu

**March 4, 2016**

Janna Fierst

Department of Biological Sciences, University of Alabama**Title:**Sex in genetic and genomic evolution**Abstract:**One of the central problems in biology is understanding how sex influences evolution. Sexual reproduction increases genetic variation through meiotic recombination, and that single difference results in a broad array of consequences at genetic and phenotypic levels. My research aims to take an innovative approach to understanding the role of sex in genetic and genomic evolution by integrating evolutionary theory and modern molecular data. Evolutionary theory has a rich conceptual and mathematical history, but until recently we could not generate the necessary data to test theoretical predictions and produce informed, biologically grounded theory. Biological technology is rapidly advancing on all fronts, and recent progress in computing, high-throughput sequencing, and molecular biology means that we are now poised to test many fundamental theoretical predictions regarding genetic evolution. I am currently pursuing two main areas of research: 1) the influence of reproductive mode on genome evolution; and 2) sex and system-level evolution.

**March 25, 2016**

Jose E. Castillo

Department of Mathematics and Statistics, San Diego State University**Title:**3D viscoelastic anisotropic seismic modeling with high-order mimetic finite-differences**Abstract:**We present a scheme to solve three-dimensional viscoelastic anisotropic wave propagation on structured staggered grids. The scheme uses a fully-staggered grid (FSG) or Lebedev grid, which allows for arbitrary anisotropy as well as grid deformation. This is useful when attempting to incorporate bathymetry or topography in the model. The correct representation of surface waves is achieved by means of high-order mimetic operators, which allow for an accurate, compact, and high-order solution at the physical boundary condition. Furthermore, viscoelastic attenuation is represented with a generalized Maxwell body approximation, which requires auxiliary variables to model the convolutional behavior of the stresses in lossy media. We demonstrate the scheme's accuracy with a series of tests against analytical and numerical solutions. Similarly, we show the scheme's performance on high-performance computing platforms. Due to its accuracy and simple pre- and post-processing, the scheme is attractive for carrying out thousands of simulations in quick succession, as is necessary in many geophysical forward and inverse problems for both industry and academia.

**April 1, 2016**

Mingyi Hong

Department of Industrial and Manufacturing Systems Engineering, Iowa State University**Title:**Iteration Complexity Analysis of Block Coordinate Descent Methods: Sublinear Convergence and Improved Rates**Abstract:**In the first part of the talk, we provide a unified iteration complexity analysis for a family of general block coordinate descent (BCD) methods, covering popular methods such as block coordinate gradient descent (BCGD) and block coordinate proximal gradient (BCPG), under various coordinate update rules. We unify these algorithms under the so-called Block Successive Upper-bound Minimization (BSUM) framework, and show that for a broad class of multi-block nonsmooth convex problems, all algorithms covered by the BSUM framework achieve a global sublinear iteration complexity of O(1/r), where r is the iteration index. Moreover, for the case of block coordinate minimization (BCM), where each block is minimized exactly, we establish the sublinear convergence rate of O(1/r) without a per-block strong convexity assumption. Further, we show that when there are only two blocks of variables, a special BSUM algorithm with a Gauss-Seidel rule can be accelerated to achieve an improved rate of O(1/r^2).

However, these bounds all depend explicitly on K (the number of variable blocks) and are at least K times worse than those of the gradient descent (GD) and proximal gradient (PG) methods. In the second part of the talk, we close this theoretical performance gap between BCD and GD/PG. First, we show that for a family of quadratic nonsmooth problems, the complexity bounds for BCD and its popular variant, block coordinate proximal gradient (BCPG), can match those of GD/PG in terms of their dependency on K. Our bounds are sharper than the known bounds for cyclic BCD by at least a factor of K. Second, we show an improved iteration complexity bound for general convex problems.**Bio:**Mingyi Hong received his B.E. degree in Communications Engineering from Zhejiang University, China, in 2005, his M.S. degree in Electrical Engineering from Stony Brook University in 2007, and his Ph.D. degree in Systems Engineering from the University of Virginia in 2011. From 2011 to 2014 he was with the Department of Electrical and Computer Engineering, University of Minnesota, first as a Post-Doctoral Fellow, then a Research Associate and a Research Assistant Professor. He is currently a Black & Veatch Faculty Fellow and an Assistant Professor with the Department of Industrial and Manufacturing Systems Engineering and the Department of Electrical and Computer Engineering (by courtesy), Iowa State University. His research interests are primarily in the fields of large-scale optimization theory, statistical signal processing, next-generation wireless communications, and their applications in big data related problems.

**April 8, 2016**

Adam Branscum

Biostatistics Program, College of Public Health and Human Sciences, Oregon State University**Title:**New Developments in Bayesian Semiparametric Regression**Abstract:**Fresh approaches to flexible Bayesian modeling of complex data using Polya trees and Dirichlet processes will be illustrated for linear regression, risk regression, ROC regression, and Youden index regression. Specifically, a new method is introduced for risk regression with continuous response data that simultaneously provides a goodness of fit test of logistic regression and the opportunity for semiparametric estimation of risks, risk ratios, and odds ratios. Computational methods for empirical and fully Bayesian inference are considered, and theoretical results establishing the consistency of an empirical Bayes goodness of fit test are presented. Dependent Polya trees provide the foundation for a novel semiparametric regression model with the flexibility to accommodate evolving residual distributions; the methodology can be used in a wide range of regression models, including in linear and nonlinear fixed-effects, random-effects, and mixed-effects models. A dependent Dirichlet process mixture model with B-splines is used to determine the covariate-specific accuracy of and optimal cutoff value for a diagnostic medical test by estimating a popular summary measure of test accuracy, namely the Youden index. Important theoretical results on the support properties of the model are discussed. Applications to lung cancer diagnosis, immune function during childhood, obesity, and the age-specific accuracy of glucose as a biomarker of diabetes will be presented.

**April 15, 2016**

Wei Cui

Department of Mathematics, University of Alabama**Title:**Fractional Brownian Motion and Application in Hedging Strategy**Abstract**

**April 22, 2016**

Hwan-Sik Yoon

Department of Mechanical Engineering, University of Alabama**Title:**Automatic Control of Excavator Manipulator using Neural Network-Based Position Estimator**Abstract:**Hydraulic excavators perform numerous tasks in the construction and mining industries. Although ground grading is a common operation, proper grading cannot easily be achieved, as it requires coordinated control of the boom, arm, and bucket cylinders. For this reason, automated grading control is being considered as an effective alternative to conventional human-operated ground grading. In this research, a path-planning method based on a 2D kinematic model and inverse kinematics is used to determine the desired trajectory of an excavator's boom, arm, and bucket cylinders. Then, the developed path-planning method and PI control algorithms for the three cylinders are verified on a simple excavator model developed in Simulink. For feedback control of the cylinder displacements, a neural network-based computer vision system is developed and used to estimate the position of the excavator manipulator in real time. The simulation results show that the proposed grade control strategy has the potential to automate the basic grading operation of a hydraulic excavator.

**April 29, 2016**

Summer Atkins

Department of Mathematics, University of Alabama**Title:**Fast Classification of Big Data: Proximal Methods for Sparse Discriminant Analysis**Abstract:**Linear discriminant analysis (LDA), a classical technique for supervised classification, is known to fail when the number of features in the data set is larger than the number of observations. To address this issue, Clemmensen et al. (2011) developed a sparse version of LDA called sparse discriminant analysis (SDA), which allows feature selection and classification to be performed simultaneously. SDA's classification performance shows great promise; however, its execution time is slow relative to other existing approaches for LDA in the high-dimensional setting. To improve the efficiency of SDA, we propose three new heuristics based on the following techniques for solving the SDA problem: the alternating direction method of multipliers, the proximal gradient method, and the accelerated proximal gradient method. We empirically demonstrate the effectiveness of our new versions of SDA for classifying simulated data and data drawn from applications in time-series classification.
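The proximal gradient method at the core of these heuristics can be illustrated on a simpler problem. The following is a minimal pure-Python sketch (not the speaker's SDA implementation; all names are illustrative) of proximal gradient descent (ISTA) applied to a toy lasso problem, where the proximal operator of the $\ell_1$ penalty is componentwise soft-thresholding:

```python
# Minimal proximal gradient (ISTA) sketch for the lasso:
#   minimize 0.5 * ||A x - b||^2 + lam * ||x||_1
# Illustrative toy code; not the SDA solvers from the talk.

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1, applied componentwise."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi > 0 else -1.0) for vi in v]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def ista(A, b, lam, step, iters=500):
    """Proximal gradient: gradient step on the smooth term, then prox."""
    x = [0.0] * len(A[0])
    At = list(zip(*A))  # transpose of A
    for _ in range(iters):
        r = [ri - bi for ri, bi in zip(matvec(A, x), b)]  # residual A x - b
        g = matvec(At, r)                                  # gradient A^T (A x - b)
        x = soft_threshold([xi - step * gi for xi, gi in zip(x, g)],
                           step * lam)
    return x
```

On A = I the iteration converges to the soft-thresholded data, which matches the closed-form lasso solution in that special case.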

**May 6, 2016**

Miloud Sadkane

Department of Mathematics, University of Brest, France**Title:**Computing the distance to the set of unstable quadratic matrix polynomials**Abstract:**This talk considers the computation of the distance from a stable quadratic matrix polynomial to a nearest unstable one in the discrete or continuous sense. The distance problem is recast as a palindromic eigenvalue problem for which a structure preserving algorithm is developed.

## Applied Math Seminar, Fall 2015

Organized by Shan Zhao

**Time: 3:30 - 4:30 pm, Fridays**

Location: 228 Gordon Palmer Hall, Department of Mathematics, University of Alabama


**Sept 4, 2015**

Min Sun

Department of Mathematics, University of Alabama**Title:**Interval Search Algorithms for Global Optimization**Abstract:**In this talk, the standard framework of interval algorithms for the global optimization of continuous functions is reviewed, followed by an introduction of several recent strategies for accelerating the standard interval algorithm. One major improvement concerns memory management, and another concerns the effective treatment of special constraints in order to obtain a sharper upper bound on the optimal objective function value. Some analytic results and supporting numerical test outcomes are presented, and examples of potential applications of such optimization methods are suggested.
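The basic interval branch-and-bound framework reviewed in the talk can be sketched in a few lines. The toy code below (illustrative only, not the speaker's algorithm) minimizes f(x) = (x-1)^2 + 2 using an exact interval lower bound on each box, a midpoint sample to tighten the incumbent upper bound, and the standard cut-off test to discard boxes that cannot contain the global minimum:

```python
# Toy 1-D interval branch-and-bound sketch; illustrative only.

def f(x):
    return (x - 1.0) ** 2 + 2.0

def lower_bound(a, b):
    """Exact lower bound of f on [a, b] (plays the role of an
    interval extension for this particular f)."""
    if a <= 1.0 <= b:
        return 2.0
    return min((a - 1.0) ** 2, (b - 1.0) ** 2) + 2.0

def interval_minimize(a, b, tol=1e-8):
    best = f((a + b) / 2.0)        # incumbent upper bound on the global min
    work = [(a, b)]
    while work:
        a, b = work.pop()
        if lower_bound(a, b) > best:
            continue               # cut-off test: box cannot contain the min
        m = (a + b) / 2.0
        best = min(best, f(m))     # midpoint sample tightens the upper bound
        if b - a > tol:
            work += [(a, m), (m, b)]
    return best
```

The cut-off test is what keeps the search tractable: boxes far from the minimizer are discarded without further bisection.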

**Sept 11, 2015**

Stavros Belbas

Department of Mathematics, University of Alabama**Title:**Dynamic stability for integrodifferential equations**Abstract:**We study the extension of Hill's method of infinite determinants to the case of integrodifferential equations with periodic coefficients and kernels. We develop the analytical theory of such methods, and we obtain certain qualitative properties of the equations that determine the boundaries between regions of dynamic stability and dynamic instability.

**Sept 18, 2015**

Hassan Fathallah-Shaykh

Department of Neurology, University of Alabama at Birmingham**Title:**Model of Malignant Brain Tumors: From a PDE to the Clinics and Biology**Abstract:**Glioblastoma multiforme (GBM) is a malignant brain tumor with poor prognosis and high morbidity due to its invasiveness. Hypoxia-driven motility and concentration-driven motility are two mechanisms of GBM invasion of the brain. The use of anti-angiogenic drugs has uncovered new progression patterns of GBM associated with significant differences in overall survival times. I will discuss a concise system of equations that models the biology of GBM, including motility, replication, and angiogenesis/anti-angiogenesis. The model, built at the scale of clinical magnetic resonance imaging, replicates the imaging features of GBM and uncovers a previously unknown pattern of progression, which was clinically validated. The model is also applied to conduct in silico clinical trials; the results identify novel motility-based GBM phenotypes, effective therapeutic strategies for each of the GBM phenotypes, and a correlation of therapeutic efficacy with overall survival times. This investigation underscores the potential significance of mathematical models.

**Sept 25, 2015**

Lin Li

Department of Metallurgical and Materials Engineering, University of Alabama**Title:**Mesoscale Material Modeling for Advanced Metallic Systems**Abstract**

**Oct 2, 2015**

Roger B. Sidje

Department of Mathematics, University of Alabama.**Title:**Efficient solution of the chemical master equation by a Krylov-based finite state projection guided by the stochastic simulation algorithm**Abstract:**Solving the chemical master equation (CME) allows us to model and simulate the stochastic behavior of biochemical reactions that take place within a biological cell. The mathematical framework is a continuous time Markov chain with a discrete state space that describes the composition of molecules inside the cell. Computing the transient probability distribution of this Markov chain allows us to track the composition over time, and this has important practical applications. However, solving the CME is challenging because the state space is very large or even countably infinite. Truncation and approximation techniques such as the finite state projection and inexact Krylov subspace techniques lead to reduced-size problems that capture enough of the cell dynamics. But these problems can still be quite large. We show how striking improvements can be further achieved by combining these reduction techniques with the stochastic simulation algorithm (SSA). This work is supported by NSF grant DMS-1320849.
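For readers unfamiliar with the SSA mentioned above: it generates statistically exact sample paths of the same Markov chain that the CME describes. The following pure-Python sketch (a toy birth-death model, not the authors' code) simulates one trajectory of a species produced at constant rate k and degraded at rate g per molecule; the stationary distribution is Poisson with mean k/g:

```python
# Gillespie SSA sketch for a birth-death process; illustrative toy model.
import random

def ssa_birth_death(k, g, t_end, rng):
    """One exact SSA trajectory: production (rate k), degradation (rate g*x).
    Returns the copy number x at time t_end."""
    t, x = 0.0, 0
    while True:
        a1, a2 = k, g * x            # reaction propensities
        a0 = a1 + a2                 # total propensity (k > 0, so a0 > 0)
        t += rng.expovariate(a0)     # exponential waiting time to next event
        if t > t_end:
            return x
        if rng.random() * a0 < a1:
            x += 1                   # birth
        else:
            x -= 1                   # death
```

Averaging many such trajectories approximates the CME solution, which is how SSA information can guide a state-space truncation.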

**Oct 9, 2015**

Lunji Song

School of Mathematics and Statistics, Lanzhou University, China**Title:**Superconvergence property of an over-penalized discontinuous Galerkin finite element gradient recovery method**Abstract:**A polynomial preserving recovery method is introduced for over-penalized symmetric interior penalty discontinuous Galerkin solutions to a quasi-linear elliptic problem. As a post-processing method, the polynomial preserving recovery is superconvergent for linear and quadratic elements under specified meshes in the regular and chevron patterns, as well as for general meshes satisfying Condition ($\epsilon$, $\sigma$). By means of an averaging technique, we prove that the polynomial preserving recovery method for averaged solutions is superconvergent, satisfying estimates similar to those for conforming finite element methods. We deduce superconvergence of the recovered gradient directly from the discontinuous solutions and naturally construct an a posteriori error estimator. Consequently, the a posteriori error estimator based on the recovered gradient is asymptotically exact. Extensive numerical results consistent with our analysis are presented.

**4-5pm, Oct 16, 2015 (Colloquium of Department of Mathematics)**

Hailiang Liu

Department of Mathematics, Iowa State University.**Title:**Entropy Satisfying Numerical Methods for Fokker-Planck-type Equations**Abstract:**Kinetic Fokker-Planck equations arise in many applications, and thus there has been considerable interest in the development of accurate numerical methods to solve them. The peculiar feature of these models is that the transient solution converges to certain equilibrium when time becomes large. For the numerical method to capture the long-time pattern of the underlying solution, some structure preserving methods have been designed to preserve physical properties exactly at the discrete level. I shall explain the main ideas and challenges through several model equations in different applications. Numerical results are reported to illustrate the capacity of the proposed algorithms.

**Oct 23, 2015**

Huy D. Vo

Department of Mathematics, University of Alabama.**Title:**Direct solution of the chemical master equation for the p53 regulation**Abstract:**A stochastic model of cellular p53 regulation was proposed in [G. B. Leenders, and J. A. Tuszynski, Frontiers in Oncology, 3(64):116, 2013] to study the interactions of p53 with MDM2 proteins. The role of stochasticity in determining the behavior of the system was studied there using stochastic simulation. We revisit the previous study by using an alternative computational strategy, namely to solve the chemical master equation (CME) directly by a fast adaptive finite state projection method. Numerical results demonstrating the feasibility of the proposed approach are reported. This work is supported by NSF grant DMS-1320849.

**Nov 6, 2015**

Laurentiu Nastac

Department of Metallurgical and Materials Engineering, University of Alabama**Title:**Advances on Experimental and Numerical Modeling of Al-based Alloys and Nanocomposites Fabricated via Ultrasonic and Electromagnetic Processing**Abstract:**The metal-matrix nano-composites (MMNCs) in this study consist of an Al alloy matrix reinforced with 1.0 wt.% SiC nanoparticles of 50 nm diameter that are dispersed within the molten alloy matrix using ultrasonic cavitation (UST) and induction melting technologies. The ultrasonic parameters required to achieve adequate cavitation for degassing and refining of the Al alloy, as well as the fluid flow characteristics for uniform dispersion of the nanoparticles into the 6061 alloy matrix, are investigated in this study using a magneto-hydro-dynamics (MHD) model and an UST model. The MHD model accounts for turbulent fluid flow, heat transfer and solidification, the electromagnetic field, and the complex interactions between the solidifying alloy and the nanoparticles by using ANSYS Maxwell, the ANSYS Fluent Dense Discrete Phase Model (DDPM), and a particle engulfment and pushing (PEP) model. The MHD model is coupled with a stochastic microstructure model to predict the formation of the microstructure during the UST and electromagnetic stirring (EM) processing of alloys and nanocomposites. The effects of UST on the solidifying microstructure of the A356-based alloys and nanocomposites were studied experimentally and numerically. Fine globular grain structures (of about 10-20 microns) were observed in the cast samples obtained via UST during solidification. Also, the eutectic microstructure was greatly modified when UST was applied during solidification.**BIOSKETCH:**Dr. Laurentiu Nastac is an Associate Professor in the Metallurgical and Materials Engineering Department at the University of Alabama, Tuscaloosa, AL, a Key FEF Professor, and the Director of the Solidification Laboratory and of the UA-COE foundry. For his teaching and research activities, please visit his website: http://lnastac.people.ua.edu/. Dr. Nastac has developed 8 software tools, made over 160 presentations, co-authored 3 patents, over 150 publications, and more than 70 scientific reports in the materials science and manufacturing fields, and co-authored 8 books, one of which is a monograph titled "Modeling and Simulation of Microstructure Evolution in Solidifying Alloys" published by Springer in 2004.

**4-5pm, Nov 13, 2015 (Colloquium of Department of Mathematics)**

Todd Burwell

Manager, Engineering Mathematics Group, Boeing Research & Technology**Title:**An Overview of Applied Mathematics at Boeing**Abstract:**In this talk we will give an overview of Boeing Research and Technology and discuss how we support the major Boeing business units. We will discuss research and consulting in Applied Mathematics in an industrial setting and give a few examples from Statistics and Operations Research consulting. We will then finish with a discussion on current research on approximate modeling and Multiobjective Optimization.

**Nov 20, 2015**

Mingwei Sun

Department of Mathematics, University of Alabama**Title:**Bayesian Nonparametric Multivariate EWMA Control Chart for Process Changepoint Detection**Abstract:**Multivariate control charts for monitoring multivariate processes commonly assume that the observations come from a multivariate normal distribution, which may not hold in many practical applications. Moreover, many multivariate control charts can only detect shifts in the mean, not shifts in the scale or in both. In this talk, a Bayesian nonparametric multivariate exponentially weighted moving average (EWMA) control chart is proposed for sequential observations, monitoring the process mean and variability simultaneously with a single control chart in phase II applications. We introduce a Bayesian nonparametric test statistic based on evolving density estimates. A novel evolving exponentially-weighted density estimate based on a Polya tree predictive rule, centered at the widely-used normal families, is found in simulations to have excellent power and robustness for detecting both location and scale shifts, as well as shifts in skew and modality. The procedure is further demonstrated on multivariate real data from ambulatory monitoring.
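For background, the classical univariate EWMA chart that this work generalizes can be sketched in a few lines (a textbook sketch with asymptotic control limits; not the Bayesian nonparametric multivariate chart of the talk):

```python
# Classical univariate EWMA control chart sketch; illustrative only.

def ewma_chart(data, lam=0.2, L=3.0, mu0=0.0, sigma=1.0):
    """Return the index of the first out-of-control signal, or None.

    Chart statistic: z_t = lam*x_t + (1-lam)*z_{t-1}, z_0 = mu0.
    Asymptotic control limits: mu0 +/- L*sigma*sqrt(lam/(2-lam)).
    """
    halfwidth = L * sigma * (lam / (2.0 - lam)) ** 0.5
    z = mu0
    for i, x in enumerate(data):
        z = lam * x + (1.0 - lam) * z
        if abs(z - mu0) > halfwidth:
            return i
    return None
```

Because z accumulates a weighted memory of past observations, the chart reacts to small sustained mean shifts much faster than a Shewhart chart, which inspects each observation in isolation.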

## Applied Math Seminar, Spring 2015

Organized by Shan Zhao

**Time: 3:30 - 4:30 pm, Fridays**

Location: 228 Gordon Palmer Hall, Department of Mathematics, University of Alabama


**January 16, 2015**

Layachi Hadji

Department of Mathematics, University of Alabama.**Title:**Nonlinear convection in unbounded regions**Abstract:**In the past half century, perturbation methods have been successful in finding stable solutions to the equations governing nonlinear convection, namely the Navier-Stokes equations coupled with the energy conservation equation, in systems with horizontal boundaries such as the Rayleigh-Bénard set-up. In the absence of horizontal boundaries, such as in the infinite vertical channel (IVC) problem or in unbounded and uniformly stratified (UUS) regions, these methods fail to capture the nonlinear solutions. In this talk, which is based on work done with my graduate student Rishad Shahmurov, I will discuss the recently discovered similarity-type solutions to the IVC problem. These solutions are found to be stable to general two-dimensional, time-dependent disturbances. Furthermore, when the analysis is extended to the UUS case, we find that the fluid becomes linearly unstable through a Batchelor-Nitsche (BN) instability mechanism. Thus, the nonlinear solutions are obtained through a long-wavelength expansion, and consequently our analysis also provides the nonlinear development of the BN instability.

**January 30, 2015**

Waller Russel

Department of Mathematics, University of Alabama**Title:**Topology, dynamics, and data**Abstract:**In this talk, we apply some tools used to study the interaction between topology and dynamics to the study of data. In particular, we use combinatorial methods to study Anosov flows and pseudo-Anosov flows on graph manifolds. As demonstrated by W. Thurston and G. Perelman, graph manifolds are precisely the 3-manifolds with vanishing simplicial volume. This means, loosely, that they are amenable to approximation by simplices, and are thus of special interest within the burgeoning new field of topological data analysis. Having equipped these graph manifolds with an Anosov or pseudo-Anosov flow, the extra structure provided by the dynamics reveals additional information about the topological structures modeling the data, and also allows researchers to manipulate the data being modeled while preserving its salient topological features.

**February 13, 2015 (Colloquium of Department of Mathematics)**

Yujiang Wu

School of Mathematics and Statistics, Lanzhou University, China.**Title:**Lopsided PMHSS method for Complex Systems**Abstract:**Based on the preconditioned modified Hermitian and skew-Hermitian splitting (PMHSS) iteration method, we introduce a lopsided PMHSS (LPMHSS) iteration method for solving a broad class of complex symmetric linear systems. The convergence properties of the LPMHSS method are analyzed, which show that, under a loose restriction on parameter $\alpha$, the iterative sequence produced by LPMHSS method is convergent to the unique solution of the linear system for any initial guess. Furthermore, we derive an upper bound for the spectral radius of the LPMHSS iteration matrix, and the quasi-optimal parameter $\alpha$ which minimizes the above upper bound is also obtained. Both theoretical and numerical results indicate that the LPMHSS method outperforms the PMHSS method when the real part of the coefficient matrix is dominant. (Joint work with A.L. Yang and X. Li)

**February 20, 2015**

Yong Zhang

Department of Geological Sciences, University of Alabama.**Title:**Fractional diffusion equations: Lagrangian approximation and tempered stable**Abstract:**This talk will introduce the numerical approximation and hydrological applications of fractional-order diffusion equations (FDEs). We will focus on 1) a tempered stable model (truncating the standard alpha-stable density) to simulate preasymptotic transport in heterogeneous media (i.e., aquifers, fractures, rivers, and soils), and 2) a general Lagrangian solver to approximate various FDEs. We propose a three-step fractional adjoint method combined with a time-domain Langevin equation, which leads to a particle-tracking based, fully Lagrangian solver. Such a numerical solver can be regarded as a specific continuous time random walk with Lévy motion in space, time, or both, providing discrete stochastic approximations for the FDEs. We will show that the grid-free Lagrangian solver captures the dynamics underlying the target FDEs and provides the only viable tool for the vector FDE with space-dependent parameters and multiscaling spreading rates.

**February 27, 2015**

Ryan Hartman

Department of Chemical and Biological Engineering, University of Alabama.**Title:**Flow Chemistry with Microchemical Systems for Chemicals, Energy, Healthcare, and Sustainability**Abstract**

**March 6, 2015**

Douglas Shepherd

Department of Physics, University of Colorado Denver**Title:**Information from fluctuation: multiscale stochastic analyses to improve efficiency of single-cell studies**Abstract:**There has been an explosion of quantitative biochemical, imaging, and computational techniques that enable investigations into biological regulation at the level of individual cells. One key observation from these techniques is highly variable cell-to-cell expression of messenger RNA in response to external stimuli in populations of otherwise identical cells. Reduced-order stochastic mathematical frameworks have been successful at predicting the shape and nature of this cell-to-cell variability in genetic regulatory networks in a number of different organisms. Here, we present a mathematical framework that considers the temporal, spatial, and cell-to-cell variability of messenger RNA, based on our previous efforts in modeling genetic expression. This framework allows one to identify the most informative experiments to reduce both model and parameter uncertainty. The resulting insight allows us to propose a multi-scale modeling approach that can extract more information from less experimental data while also reducing computational costs by orders of magnitude.

**March 13, 2015**

Leighton Wilson

Department of Mathematics, University of Alabama**Title:**Unconditionally stable time splitting methods for the electrostatic analysis of solvated biomolecules**Abstract:**In this talk, we introduce unconditionally stable operator splitting methods for solving the time dependent nonlinear Poisson-Boltzmann (NPB) equation, a framework vital to the electrostatic analysis of solvated biomolecules. In a pseudo-transient continuation solution of the NPB equation, a long time integration is needed to reach the steady state. This calls for time stepping schemes that are stable and accurate for large time increments. The existing alternating direction implicit (ADI) methods for the NPB equation, although fully implicit, are only conditionally stable. To overcome this difficulty, we propose several new operator splitting schemes. We also consider further accuracy improvements to the new schemes, including Richardson extrapolation. In addition, we present some preliminary results on increasing stability for ADI methods using a regularization scheme.
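The analytic-integration idea behind such time splitting schemes can be illustrated on a scalar toy problem. The sketch below (illustrative only, not the NPB solver from the talk) applies Strang splitting to the logistic equation dy/dt = y - y^2, integrating each sub-flow exactly; the composite scheme is second-order accurate in the step size:

```python
# Strang splitting sketch for dy/dt = y - y^2; illustrative toy problem.
import math

def strang_logistic(y0, t_end, n):
    """Strang splitting with n steps; each sub-flow is solved analytically:
       linear part    dy/dt = y    ->  y * exp(h)
       nonlinear part dy/dt = -y^2 ->  y / (1 + h*y)
    """
    h = t_end / n
    y = y0
    for _ in range(n):
        y *= math.exp(h / 2.0)     # half step of the linear flow
        y = y / (1.0 + h * y)      # full step of the nonlinear flow (analytic)
        y *= math.exp(h / 2.0)     # half step of the linear flow
    return y

def logistic_exact(y0, t):
    """Exact solution of the logistic equation, for comparison."""
    return y0 * math.exp(t) / (1.0 + y0 * (math.exp(t) - 1.0))
```

Halving the step size should reduce the error by roughly a factor of four, the signature of second-order accuracy; the analytic sub-flow solves are what make each step unconditionally stable.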

**March 27, 2015**

Duc Nguyen

Department of Mathematics, University of Alabama**Title:**Time-Domain Matched Interface and Boundary Methods for Transverse Electric Modes with Complex Dispersive Interfaces**Abstract:**A material is dispersive when its permittivity or permeability is a function of frequency. Dispersive material models are therefore often used to simulate the propagation of electromagnetic waves in complex environments such as soils, rock, ice, snow, and biological tissue, and they play an important role in numerous electromagnetic applications. For instance, ground penetrating radar (GPR) and microwave imaging for the early detection of breast cancer involve dispersive soil and dispersive tissue, respectively. It is known that the transverse electric (TE) Maxwell's equations in the presence of dispersive media produce non-smooth and discontinuous solutions. We formulate interface auxiliary differential equations (IADEs) to capture the evanescent changes of the field regularities along the interface. A novel matched interface and boundary time-domain (MIBTD) method based on the leapfrog scheme is proposed to rigorously implement the time-dependent jump conditions. Numerical tests indicate that second-order accuracy is achieved in both $L_\infty$ and $L_2$ norms when dealing with complex interfaces.

**April 3, 2015**

Nguyen Hoang

Department of Mathematics, University of West Georgia**Title:**On node selection for pseudo-spectral collocation methods**Abstract:**In this talk I will discuss several choices of nodes for pseudo-spectral collocation methods. A justification for the "optimality" of the scaled-Chebyshev nodes for interpolation is given. Node distributions which could yield better results than the most commonly used Chebyshev-Gauss-Lobatto nodes for approximating derivatives of functions and for approximating antiderivatives of functions are proposed. A fast algorithm for computing pseudo-spectral integration matrices is also discussed.
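As background, the Chebyshev-Gauss-Lobatto (CGL) nodes mentioned above are simple to generate, and interpolation at them is numerically stable when done in barycentric form. A minimal sketch (illustrative only, not the speaker's code) using the standard barycentric weights for CGL nodes:

```python
# CGL nodes and barycentric interpolation sketch; illustrative only.
import math

def cgl_nodes(n):
    """Chebyshev-Gauss-Lobatto nodes x_j = cos(j*pi/n), j = 0..n, on [-1, 1].
    They cluster near the endpoints, which suppresses the Runge phenomenon."""
    return [math.cos(j * math.pi / n) for j in range(n + 1)]

def barycentric_eval(nodes, fvals, x):
    """Barycentric interpolation at CGL nodes.
    Weights are (-1)^j, halved at the two endpoints."""
    num = den = 0.0
    for j, (xj, fj) in enumerate(zip(nodes, fvals)):
        if x == xj:
            return fj
        w = (-1.0) ** j
        if j == 0 or j == len(nodes) - 1:
            w *= 0.5
        c = w / (x - xj)
        num += c * fj
        den += c
    return num / den
```

Even for the Runge function 1/(1+25x^2), which defeats interpolation at equispaced nodes, interpolation at CGL nodes converges rapidly as the number of nodes grows.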

**April 10, 2015 (Colloquium of Department of Mathematics)**

Timothy Hanson

Department of Statistics, University of South Carolina**Title:**Recent Advances in Bayesian Spatial Survival Modeling**Abstract:**With the availability of large cancer registries such as SEER (http://seer.cancer.gov/), survival data on spatially referenced outcomes has become more routinely encountered over the last decade. A review of modeling time-to-event data for spatially-correlated outcomes is provided, focusing on traditional frailty models within the context of proportional hazards, accelerated failure time, proportional odds, and nonparametric approaches. Then, recent advances in marginal modeling through spatial copulas are presented. Analyses of several data sets broadly illustrate the pros and cons of different survival and spatial correlation models.
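
For orientation, a standard spatial frailty formulation within proportional hazards (a textbook form, not necessarily the exact specification used in the talk) models the hazard for subject $j$ in region $i$ as

$$ h(t \mid \mathbf{x}_{ij}) = h_0(t)\, \exp\!\left(\mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + w_i\right), $$

where $h_0$ is a baseline hazard, $\boldsymbol{\beta}$ are regression effects, and the frailties $w_1,\dots,w_m$ carry the spatial correlation, e.g. through a conditionally autoregressive (CAR) prior.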

**April 17, 2015**

Brian Munsky

Department of Chemical and Biological Engineering, Colorado State University**Title:**Quantifying, Modeling and Predicting Stochastic Spatiotemporal Fluctuations of Signal-Activated Gene Expression**Abstract:**Spatial, temporal and stochastic fluctuations cause genetically identical cells to exhibit wildly different behaviors. Often labeled "noise," these fluctuations are frequently considered a nuisance that compromises cellular responses, complicates modeling, and makes predictive understanding all but impossible. However, if we examine cellular fluctuations more closely and match them to discrete stochastic analyses, we discover an untapped, yet powerful information resource [1]. In this talk, I will present our collaborative endeavors to integrate single-cell experiments with precise stochastic analyses to gain new insight and quantitatively predictive understanding for Mitogen Activated Protein Kinase (MAPK) signal-activated gene regulation. I will explain how we experimentally quantify transcription dynamics at high temporal (1-minute) and spatial (1-molecule) resolutions; how we use precise computational analyses to model this data and efficiently infer biological mechanisms and parameters; how we predict and evaluate the extent to which model constraints (i.e., data) and uncertainty (i.e., model complexity) contribute to our understanding, and how we design novel experiments to rapidly and systematically improve this understanding. I will illustrate the effectiveness of our integrated approach with the identification of predictive models for MAPK induction of transcription in yeast [2] and mammalian [3] systems.

References

1. B. Munsky, G. Neuert and A. van Oudenaarden, Science, 2012, 336, 6078, 183-187.

2. G. Neuert, B. Munsky, et al, Science, 2013, 339, 6119, 584-587.

3. A. Senecal, B. Munsky, et al, Cell Reports, 2014, 8,1, 75-83.
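
The discrete stochastic analyses described above are typically built on chemical-master-equation models simulated by Gillespie's stochastic simulation algorithm. A minimal sketch for a one-species birth-death transcription model (rates and names hypothetical; not the models of the cited papers):

```python
import numpy as np

def gillespie_birth_death(k=10.0, gamma=1.0, t_end=5.0, seed=0):
    """Gillespie SSA for 0 -> mRNA (rate k) and mRNA -> 0 (rate gamma*n)."""
    rng = np.random.default_rng(seed)
    t, n = 0.0, 0
    times, counts = [t], [n]
    while t < t_end:
        a1, a2 = k, gamma * n            # propensities of birth and death
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)   # waiting time to the next reaction
        if rng.random() < a1 / a0:
            n += 1                       # birth
        else:
            n -= 1                       # death
        times.append(t); counts.append(n)
    return np.array(times), np.array(counts)
```

Each trajectory is one exact sample from the chemical master equation; ensembles of such runs are what get matched against single-cell measurements.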

**April 24, 2015**

Miloud Sadkane

Department of Mathematics, University of Brest, France**Title:**Improving and extending the Davison-Man method**Abstract:**The Davison-Man method is an iterative technique for solving Lyapunov equations in which the approximate solution is updated through matrix integrals and doubling procedures. The convergence is quadratic in theory, but in practice there are examples where the method stagnates and no further improvement is seen. In this talk an implementation that avoids stagnation is proposed. The implementation is applicable to Lyapunov and Sylvester equations and has essentially optimal efficiency. Finally, an extension to the large-scale case is presented and its convergence properties are analyzed.
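
The doubling idea behind the method can be sketched as follows: for stable $A$, the solution of $AX + XA^T + Q = 0$ is $X = \int_0^\infty e^{At} Q e^{A^T t}\,dt$; one approximates the integral on a small interval $[0,h]$ and then repeatedly doubles the interval. A minimal sketch of this basic iteration (not the stagnation-avoiding implementation of the talk; `h` and `iters` are illustrative choices):

```python
import numpy as np
from scipy.linalg import expm

def davison_man(A, Q, h=1e-4, iters=50):
    """Davison-Man-style doubling for the Lyapunov equation A X + X A^T + Q = 0.

    Trapezoidal approximation of int_0^h e^{At} Q e^{A^T t} dt, then
    doubling: X <- X + E X E^T and E <- E @ E squares the interval length.
    """
    E = expm(A * h)
    X = 0.5 * h * (Q + E @ Q @ E.T)    # trapezoid rule on [0, h]
    for _ in range(iters):
        X = X + E @ X @ E.T
        E = E @ E
    return X
```

Each iteration doubles the integration interval, which is the source of the quadratic convergence mentioned in the abstract.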

**May 8, 2015 (Colloquium of Department of Mathematics)**

2 - 3 pm, at GP 302

Shibo Liu

School of Mathematical Sciences, Xiamen University, China**Title:**Minimization Methods and Existence of Solutions for Nonlinear Differential Equations**Abstract**

## Applied Math Seminar, Fall 2014

Organized by Shan Zhao

**Time: 3:30 - 4:30 pm, Fridays**

Location: 155 Gordon Palmer Hall, Department of Mathematics, University of Alabama

**September 5, 2014**

Shan Zhao

Department of Mathematics, University of Alabama**Title:**New Developments of Alternating Direction Implicit (ADI) Algorithms for Biomolecular Solvation Analysis**Abstract:**In this talk, I will first present some tailored alternating direction implicit (ADI) algorithms for solving nonlinear PDEs in biomolecular solvation analysis. Based on the variational formulation, we have previously proposed a pseudo-transient continuation model to couple a nonlinear Poisson-Boltzmann (NPB) equation for the electrostatic potential with a geometric flow equation defining the biomolecular surface. To speed up the simulation, we have reformulated the geometric flow equation so that an unconditionally stable ADI algorithm can be realized for molecular surface generation. Meanwhile, to overcome the stability issue associated with the strong nonlinearity, we have introduced an operator splitting ADI method for solving the NPB equation. Motivated by our biological applications, we have also recently carried out some studies on the algorithm development for solving the parabolic interface problem. A novel matched ADI method has been developed to solve a 2D diffusion equation with material interfaces involving complex geometries. For the first time in the literature, the ADI finite difference method is able to deliver a second order of accuracy in space for arbitrarily shaped interfaces and spatial-temporal dependent interface conditions.
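
The classical ADI idea these algorithms build on, in its Peaceman-Rachford form for the 2D heat equation $u_t = u_{xx} + u_{yy}$, replaces one implicit 2D solve with two one-dimensional tridiagonal solves per step. A minimal sketch on a uniform grid with zero Dirichlet boundaries (the textbook scheme, not the matched or operator-splitting ADI of the talk):

```python
import numpy as np
from scipy.linalg import solve_banded

def adi_heat_step(u, dt, h):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy.

    Half step 1 is implicit in x and explicit in y; half step 2 is the
    reverse. u holds interior values only; zero boundaries are implied.
    """
    n = u.shape[0]
    r = dt / (2 * h * h)
    def lap(v):                      # tridiag(1, -2, 1) applied along axis 0
        w = -2.0 * v
        w[1:] += v[:-1]
        w[:-1] += v[1:]
        return w
    # banded storage of (I - r*T), T = tridiag(1, -2, 1)
    ab = np.zeros((3, n))
    ab[0, 1:] = -r
    ab[1, :] = 1 + 2 * r
    ab[2, :-1] = -r
    # half step 1: (I - r T_x) u* = (I + r T_y) u
    rhs = u + r * lap(u.T).T
    u_star = solve_banded((1, 1), ab, rhs)        # tridiagonal solves in x
    # half step 2: (I - r T_y) u_new = (I + r T_x) u*
    rhs = u_star + r * lap(u_star)
    return solve_banded((1, 1), ab, rhs.T).T      # tridiagonal solves in y
```

Each half step costs only a batch of tridiagonal solves, which is why ADI scales so well compared with a fully implicit 2D solve.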

**September 12, 2014**

Lunji Song

School of Mathematics and Statistics, Lanzhou University, China**Title:**Interior Penalty Discontinuous Galerkin Methods with Implicit Time-Integration Techniques for Nonlinear Parabolic Equations**Abstract:**We prove existence and numerical stability of numerical solutions of three fully discrete interior penalty discontinuous Galerkin methods for solving nonlinear parabolic equations. Under appropriate regularity conditions, we give the $\ell^2(H^1)$ and $\ell^\infty(L^2)$ error estimates of the fully discrete symmetric interior penalty discontinuous Galerkin scheme with implicit $\theta$-schemes in time, which include the backward Euler and Crank-Nicolson finite difference approximations. Our estimates are optimal with respect to the mesh size $h$. The theoretical results are confirmed by some numerical experiments.
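
For reference, the implicit $\theta$-scheme mentioned here advances a semidiscrete parabolic problem $u_t = F(u)$ by

$$ \frac{u^{n+1} - u^n}{\Delta t} = \theta\, F(u^{n+1}) + (1 - \theta)\, F(u^n), \qquad \theta \in [0, 1], $$

with $\theta = 1$ giving backward Euler and $\theta = \tfrac{1}{2}$ the Crank-Nicolson scheme.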

**September 19, 2014**

Chuan Li

Department of Mathematics, University of Alabama**Title:**Solving the Poisson-Boltzmann equation and calculating electrostatics via parallel computing for large macromolecules and complexes**Abstract:**One common approach to study electrostatics in molecular biology is to numerically solve the Poisson-Boltzmann equation (PBE) and calculate the electrostatic potential and energies. However, all existing numerical methods for solving the PBE become intolerably slow, due to high computational cost, when macromolecules and complexes are large enough to consist of hundreds of thousands of charged atoms. Parallel computing is a cutting-edge technique which teams up multiple computing units and significantly speeds up the calculation. In this talk, I will present a set of parallel computing algorithms developed to solve the PBE. As a demonstration of the efficiency and capability of these algorithms, computational results obtained by implementing them in the program DelPhi on real macromolecules and complexes are given as well.

**September 26, 2014 (Colloquium of Department of Mathematics)**

Lili Ju

Department of Mathematics, University of South Carolina**Title:**A Parallel Computational Model for 3D Thermo-Mechanical Stokes Flow Simulations of Ice Sheets**Abstract:**This talk focuses on the development of an efficient, three-dimensional, thermo-mechanically coupled, nonlinear Stokes flow computational model for ice sheet simulation. The model features stable and high-order accurate discretizations on variable resolution grids. In particular, we employ a locally mass-conserved finite element approximation for the Stokes problem, an efficient iterative solution method for treating the viscosity nonlinearity, an accurate finite element solver for the temperature equation, and a conservative finite volume solver for handling change of ice thickness. We demonstrate the efficiency and physical reliability of the Stokes model using various numerical tests on manufactured solutions, benchmark experiments, and a realistic Greenland ice sheet.

**October 3, 2014**

Charles O'Neill

Aerospace Engineering and Mechanics, University of Alabama**Title:**Practical CFD in an Aircraft Prototyping Environment

-- Necessity is the mother of all inventions.**Abstract:**Fast and lean is returning to the aerospace field by necessity. My research interest is injecting rapid design tools and engineers into fast-paced aircraft design firms.

After two generations of aircraft design cycles measured in decades (e.g. F-35), US aviation is being squeezed by a rapid advance in technology, increased international competition, and constrained funding. It is important to realize that the current aerospace field is not mature but rather a saturated process in a tightly coupled multi-disciplinary design space with over-constrained requirements. The primary limitation is the time required for a design cycle.

This talk provides an introduction to aerodynamics engineering in industry and how computational fluid dynamics (CFD) is involved in actual prototype aircraft design. The talk intends to provide a discussion of possible interactions between engineers and mathematicians for the development of CFD techniques and capabilities.

**October 10, 2014**

Brendan Ames

Department of Mathematics, University of Alabama**Title:**Alternating Direction Methods for Dimensionality Reduction, Classification, and Feature Selection**Abstract:**Linear Discriminant Analysis (LDA) is a classical technique for dimensionality reduction in supervised classification which relies on projecting the given training data to a lower dimensional space where items in the same class are projected closer to each other than those in other classes. This process is typically performed using a simple change of variables and the solution of the resultant eigenproblem. Unfortunately, this approach fails in the high-dimensional setting where the data being processed contains fewer observations than features; in this case, we cannot perform the change of variables necessary to obtain this projection. In this talk, we present a modification, based on $\ell_1$-regularization and the alternating direction method of multipliers, for performing LDA in this high-dimensional setting. Moreover, we describe how this approach can be extended to solve penalized eigenproblems in general, including those arising from Sparse Principal Component Analysis, and illustrate the efficacy of our approach on a variety of problems drawn from time-series classification.
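
The alternating direction method of multipliers underlying this approach is easiest to see on the simpler $\ell_1$-regularized least-squares (lasso) problem. A minimal sketch of generic ADMM (not the penalized-LDA solver of the talk; `rho` and `iters` are illustrative choices):

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of k*||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=300):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1 via the split x = z."""
    n = A.shape[1]
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # factor once, reuse
    Atb = A.T @ b
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # ridge-type update
        z = soft_threshold(x + u, lam / rho)               # shrinkage update
        u = u + x - z                                      # dual ascent
    return z
```

The same pattern, a smooth subproblem alternated with a cheap proximal step plus a dual update, is what carries over to the penalized eigenproblems described in the abstract.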

**October 17, 2014**

Wei Zhu

Department of Mathematics, University of Alabama**Title:**Some Variational Models in Image Processing**Abstract:**Image processing is an active research field with lots of applications in medical diagnosis, pattern recognition, remote image processing, security check, etc. It aims to process raw images so that meaningful signal information can be captured and understood. During the last three decades, many mathematical tools have been employed in accomplishing different tasks in this field, including partial differential equations, variational methods, statistical methods, harmonic analysis, etc. Variational methods have proved to be particularly powerful and flexible for developing models in image processing. In this talk, I will first present some typical topics in this field, including image segmentation and denoising, and then review two well-known variational models --- the Mumford-Shah model and the Rudin-Osher-Fatemi model. I will also discuss some of my research works in this field, including both the modeling and the development of efficient numerical methods for different imaging problems.
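
For reference, the Rudin-Osher-Fatemi model mentioned above denoises an image $f$ on a domain $\Omega$ by minimizing the total-variation energy

$$ E(u) = \int_\Omega |\nabla u|\, dx + \frac{\lambda}{2} \int_\Omega (u - f)^2\, dx, $$

where the first term suppresses oscillations while preserving edges and the second keeps $u$ close to the data, with $\lambda > 0$ balancing the two.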

**October 24, 2014**

Yuhui Chen

Department of Mathematics, University of Alabama**Title:**A New Bayesian Nonparametric Control Chart for Individual Measurements**Abstract:**Control charts, as a screening process, have been widely used in many fields where quality monitoring is required for product quality improvement. The most commonly used control charts for data measured on a continuous scale assume the underlying distribution belongs to a certain parametric family, such as the normal. As such, the process may lack in-control robustness and may not be sensitive to out-of-control data if the underlying distribution is not as assumed. In this talk, I propose a new Bayesian nonparametric control chart built upon a newly developed nonparametric prior called the transformed Bernstein polynomial prior (TBPP). The proposed control chart can efficiently adjust the initial guess of the underlying distribution using the data, improving robustness. Due to its robustness and efficiency, the proposed method can be used in practice for monitoring a quality control process.
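
For contrast, the normal-theory baseline that such a nonparametric chart relaxes is the classical Shewhart individuals chart, whose limits are set from the average moving range. A minimal sketch using the standard SPC constant (this is the parametric chart being improved upon, not the TBPP chart of the talk):

```python
import numpy as np

def individuals_chart_limits(x):
    """Classical Shewhart individuals (I-MR) chart limits.

    Sigma is estimated from the average moving range of successive
    observations, using the constant d2 = 1.128 for subgroups of size 2.
    """
    x = np.asarray(x, dtype=float)
    center = x.mean()
    mr_bar = np.mean(np.abs(np.diff(x)))   # average moving range
    sigma_hat = mr_bar / 1.128             # d2 constant for n = 2
    return center - 3 * sigma_hat, center, center + 3 * sigma_hat
```

Points falling outside the returned limits are flagged as out-of-control signals; the normality assumption baked into the 3-sigma limits is exactly what the Bayesian nonparametric approach avoids.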

**November 7, 2014**

Yuanyuan Song

Department of Mathematics, University of Alabama**Title:**Nonlinear analysis of the influence of surfactant on the stability of a liquid bilayer inside a tube**Abstract:**The lung's airways are coated internally with a liquid bilayer consisting of a serous layer immediately coating the airway wall and a more viscous mucus layer exposed to the gas core. A surface tension instability at the interfaces may lead to the formation of liquid plugs that block the passage of air, a process known as airway closure. Here we consider this thin liquid bilayer coating within a compliant tube in the presence of an insoluble surfactant monolayer at the mucus-gas interface. Surfactant can reduce the surface tension and induce a surface stress gradient, both of which are stabilizing. Lubrication theory is used to derive a system of nonlinear evolution equations for the thickness of the layers, the location of the tube wall, and the surfactant concentration. The effects of various parameters (the ratio of the bilayer thickness to the tube radius, the layer thickness ratio, the surface tension ratio, the viscosity ratio between the two layers, and wall compliance parameters) are investigated by carrying out numerical simulations. For a single layer in a rigid tube, surfactant can increase the closure time by approximately a factor of five. However, for a bilayer in a compliant tube, the presence of surfactant delays closure by a significantly larger factor, as high as twenty times or even more.

**November 14, 2014**

David A. Dixon

Robert Ramsay Chair, Department of Chemistry, University of Alabama**Title:**Computational Chemistry: Chemical Accuracy and Errors at Different Scales**Abstract:**Computational chemistry can be used to reliably predict the properties of compounds with density functional theory and correlated molecular orbital theory. New basis sets coupled with effective core potentials, improved software, new correlation methods, and access to high performance, massively parallel computers make it possible to reliably calculate the energetic properties of many compounds. We will describe the software and applied mathematics issues and needs in terms of critical energy applications. As an example, the use of computational methods to design syntheses for new materials for catalysis, solar energy capture, and nuclear fuels, is in its infancy. We will describe the complex issues that need to be addressed for the design of new materials syntheses and initial progress on understanding basic steps in such reaction mechanisms.

**November 21, 2014**

Zhongsheng He

Department of Ecology and Environment, Fujian Agriculture and Forestry University, China**Title:**Cooperation and Competition Relationships in the Regeneration of an Endangered Plant, Castanopsis kawakamii**Abstract:**The aim of this study was to understand the relationships among tree species in forest gaps and non-gaps, in order to reveal the adaptability of Castanopsis kawakamii seedlings in different habitats. The results showed that: (1) After the formation of forest gaps, the main species in the gaps largely share the heterogeneous resources, which promotes plant growth; however, competition becomes strong in the later stage of both gaps and non-gaps. (2) Applying the single-slope change-point method, the optimal competition zones for C. kawakamii seedling competition intensity were at distances of 1.68 and 2.00 meters from the objective trees; in field practice it is therefore convenient to survey competing trees within about 2 meters of the objective trees. (3) Intraspecific competition and overall interspecific competition in forest gaps were higher than in non-gaps. Protection of C. kawakamii seedlings should be strengthened in the early stage of forest gaps; meanwhile, artificial gaps should be created to promote the growth of C. kawakamii seedlings and saplings in non-gaps once their height reaches 100-150 cm. Scientific management and reasonable protection of the C. kawakamii natural forest should also be strengthened. These results provide a scientific basis for the biodiversity conservation and population regeneration of endangered plants.