Publications

Reinforcement Learning for Multiple Goals in Goals-Based Wealth Management

This research addresses the complexities investors face when managing multiple financial goals, such as purchasing a home, funding education, or planning for retirement. Traditional financial planning methods often struggle to dynamically allocate resources among competing objectives, especially under uncertain market conditions.

The authors propose a reinforcement learning (RL) framework to optimize portfolio strategies across multiple goals. The RL agent learns to make investment decisions that balance the trade-offs between different objectives, adapting to changing financial landscapes and individual preferences. Key contributions of the paper include:

  1. Dynamic Goal Prioritization: The RL model dynamically adjusts the prioritization of financial goals based on real-time portfolio performance and evolving investor circumstances.
  2. Adaptation to Market Uncertainty: By interacting with simulated market environments, the RL agent learns to navigate uncertainties, enhancing the robustness of investment strategies.
  3. Personalized Investment Strategies: The framework accommodates individual investor preferences, tailoring strategies to align with specific risk tolerances and goal importance.

The study's findings suggest that RL-based approaches can outperform traditional static allocation methods, offering a more flexible and responsive tool for financial advisors and investors aiming to achieve multiple financial objectives.
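
To make the setup concrete, here is a minimal sketch of how a multi-goal wealth management problem might be framed as an RL environment. The goal schedule, market model, contribution amounts, and priority-weighted reward below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Minimal sketch of a multi-goal, goals-based wealth management
# environment in the usual RL (gym-style) shape. All parameters and
# dynamics here are illustrative assumptions.

GOALS = [                 # (due year, cost, priority weight) -- hypothetical
    (5, 50_000, 1.0),     # home down payment
    (10, 80_000, 0.8),    # education
    (30, 400_000, 1.5),   # retirement
]
PORTFOLIOS = [(0.02, 0.01), (0.05, 0.10), (0.08, 0.20)]  # (mean, vol) of annual return

class MultiGoalEnv:
    def __init__(self, wealth=30_000.0, contribution=10_000.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.w0, self.contribution = wealth, contribution

    def reset(self):
        self.t, self.wealth = 0, self.w0
        self.met = [False] * len(GOALS)
        return (self.t, self.wealth, tuple(self.met))

    def step(self, action):
        mu, sigma = PORTFOLIOS[action]                  # chosen risk level
        growth = 1.0 + self.rng.normal(mu, sigma)
        self.wealth = self.wealth * growth + self.contribution
        self.t += 1
        reward = 0.0
        for i, (due, cost, weight) in enumerate(GOALS):
            if not self.met[i] and self.t == due and self.wealth >= cost:
                self.wealth -= cost                     # fund the goal as it comes due
                self.met[i] = True
                reward += weight                        # priority-weighted payoff
        done = self.t >= max(g[0] for g in GOALS)
        return (self.t, self.wealth, tuple(self.met)), reward, done

# Usage: any tabular or deep RL method can be trained against this
# interface; a random policy gives a baseline.
env, total = MultiGoalEnv(), 0.0
state, done = env.reset(), False
while not done:
    state, r, done = env.step(env.rng.integers(len(PORTFOLIOS)))
    total += r
print("priority-weighted goals met:", total)
```

Because the reward is the priority-weighted sum of goals actually funded, an agent trained on this interface is pushed toward exactly the trade-off the paper studies: taking enough risk to reach high-priority goals without jeopardizing nearer-term ones.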

For more details, read the full paper here.

A Unified Innovized Progress Operator for Performance Enhancement in Evolutionary Multi- and Many-Objective Optimization

This research introduces a machine learning-based Unified Innovized Progress (UIP) operator designed to simultaneously enhance both convergence and diversity in reference vector-based evolutionary multi- and many-objective optimization algorithms (RV-EMOAs). Traditional evolutionary algorithms often face challenges in efficiently balancing convergence towards the Pareto front and maintaining a diverse set of solutions across it. The UIP operator addresses these challenges by:

  1. Convergence Enhancement: It captures efficient search directions by mapping inter-generational solutions along different reference vectors.
  2. Diversity Enhancement: It improves the spread and uniformity of solutions by mapping intra-generational solutions across reference vectors.

A key advantage of the UIP operator is its generic applicability to various RV-EMOAs without necessitating additional solution evaluations beyond those required by the base algorithms.
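
As a rough illustration of the innovized-progress idea, the sketch below learns how solutions move between generations and reuses that learned map to advance the current population without extra objective evaluations. The affine least-squares learner, the synthetic data, and the bounds handling are simplifying assumptions, not the operator as published, which also pairs solutions by reference vector and treats convergence and diversity mappings separately:

```python
import numpy as np

# Sketch of the "innovized progress" idea: learn how solutions improve
# between generations, then apply that learned map to push current
# solutions forward without additional objective evaluations.
# The affine least-squares model is a stand-in assumption for the
# ML model used in the paper.

def fit_progress_model(X_old, X_new):
    """Fit an affine map X_old -> X_new by least squares."""
    A = np.hstack([X_old, np.ones((len(X_old), 1))])  # add bias column
    W, *_ = np.linalg.lstsq(A, X_new, rcond=None)
    return W

def apply_progress(X, W, lo=0.0, hi=1.0):
    """Advance decision vectors with the learned map, respecting bounds."""
    A = np.hstack([X, np.ones((len(X), 1))])
    return np.clip(A @ W, lo, hi)

# Usage with synthetic decision vectors: generation-t solutions paired
# with generation-(t+k) solutions (e.g., by shared reference vector).
rng = np.random.default_rng(0)
X_old = rng.random((50, 10))                                   # gen t
X_new = np.clip(X_old - 0.05 + 0.02 * rng.standard_normal((50, 10)), 0, 1)  # gen t+k
W = fit_progress_model(X_old, X_new)
offspring = apply_progress(rng.random((50, 10)), W)
```

The training pairs come from solutions the algorithm has already evaluated, which is why such an operator can be bolted onto different RV-EMOAs without increasing their evaluation budget.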

The study's extensive experimental evaluation, comprising 24,056 runs on various multi- and many-objective problems, demonstrated that integrating the UIP operator with different RV-EMOAs resulted in statistically superior performance in approximately 36% of instances and equivalent or better performance in about 92% of cases compared to the respective base algorithms.

For more details, read the full paper here.

A Localized High-Fidelity-Dominance-Based Many-Objective Evolutionary Algorithm

This research addresses challenges in many-objective optimization, particularly the limitations of traditional Pareto-dominance methods when dealing with problems involving four or more objectives. The authors introduce the High-Fidelity-Dominance (HFiD) principle, which simultaneously considers three Human Decision-Making (HDM) elements:

  1. Number of Improved Objectives: Assessing how many objectives a solution improves upon compared to another.
  2. Extent of Improvements: Evaluating the magnitude of these improvements.
  3. Relative Preferences Among Objectives: Incorporating decision-makers' preferences regarding the importance of different objectives.

Building upon the HFiD principle, the paper proposes the Localized High-Fidelity-Dominance (LHFiD) approach. This method integrates the HFiD principle within a reference vector-based framework, aiming to enhance the search efficiency and solution quality in many-objective evolutionary algorithms.
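
For intuition, the sketch below implements a dominance check in the spirit of HFiD, combining the three elements listed above: how many objectives improve, by how much, and how much each objective matters to the decision-maker. The aggregation rule and threshold are assumed simplifications, not the published definition, and the localization to reference vectors used by LHFiD is omitted; minimization of all objectives is assumed:

```python
import numpy as np

# Simplified dominance check in the spirit of the HFiD principle.
# The aggregation and threshold below are illustrative assumptions,
# not the published definition. All objectives are minimized.

def hfid_dominates(a, b, weights, tau=0.0):
    a, b, w = map(np.asarray, (a, b, weights))
    diff = b - a                       # positive where `a` is better
    n_better = np.sum(diff > 0)        # element 1: number of improved objectives
    n_worse = np.sum(diff < 0)
    if n_better == 0:
        return False
    score = np.sum(w * diff)           # elements 2 and 3: preference-weighted
                                       # net extent of improvement
    return score > tau and n_better >= n_worse

# Usage: with equal weights, a solution that improves two of three
# objectives by a wide margin dominates one that is only slightly
# better on the remaining objective -- a trade-off pure Pareto
# dominance cannot express.
print(hfid_dominates([1.0, 1.0, 2.1], [2.0, 2.0, 2.0], weights=[1, 1, 1]))  # True
```

Under pure Pareto dominance the two solutions above would be mutually non-dominated, which is exactly the selection-pressure problem that grows severe with four or more objectives.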

The study includes an extensive experimental evaluation, comprising 41,912 experiments, comparing the LHFiD approach against existing many-objective evolutionary algorithms. The results show that LHFiD compares favorably with these algorithms on complex many-objective optimization problems.

For more details, read the full paper here.