2. Foundations: Understanding ANFIS and PINN
2.1. Adaptive Neuro-fuzzy Inference Systems (ANFIS)
ANFIS represents a powerful hybrid intelligent system that merges the learning capabilities of Artificial Neural Networks (ANNs) with the knowledge representation and inference abilities of Fuzzy Inference Systems (FIS) [1]. This integration enables ANFIS to learn complex nonlinear relationships between inputs and outputs while retaining the inherent interpretability of fuzzy rules [1]. The ANFIS architecture consists of five distinct layers, designed to mimic a Takagi-Sugeno-Kang fuzzy inference system [5]. Each layer performs a specific function in mapping inputs to outputs based on fuzzy IF-THEN rules [1]:
1. Layer 1 (Input Layer): This layer receives crisp (non-fuzzy) input values and transmits them directly to the subsequent layer without any computation.
2. Layer 2 (Fuzzification Layer): In this layer, membership functions (e.g., triangular, trapezoidal, Gaussian, or generalized bell) are applied to the input values, converting them into fuzzy sets and calculating their degrees of membership. These membership functions are parameterized and adjustable, forming what are known as "premise parameters". Each node in this layer typically represents a linguistic label, such as "low," "medium," or "high".
3. Layer 3 (Rule Layer): Each node within this layer corresponds to a specific fuzzy IF-THEN rule. The firing strength of each rule is computed by combining the membership values obtained from the fuzzification layer, usually through fuzzy operators such as AND (e.g., minimum or product). This layer effectively represents the antecedent part of the fuzzy rules.
4. Layer 4 (Normalization Layer): This layer normalizes the firing strengths calculated in the rule layer. The firing strength of each rule is divided by the sum of all firing strengths, ensuring that the collective contribution of all rules sums to unity. These normalized strengths are crucial as they determine the relative importance of each rule in contributing to the final output.
5. Layer 5 (Defuzzification/Output Layer): The final crisp output is computed in this layer as a weighted average of the rule outputs. This layer incorporates "consequent parameters," which are typically linear combinations of the input variables. Standard ANFIS architectures are designed to produce a single output.
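The five layers above can be sketched numerically for a two-input, four-rule TSK system. All membership centers, widths, and consequent coefficients below are illustrative placeholders, not values from any cited model:

```python
import math

def gauss(x, c, sigma):
    # Layer 2: Gaussian membership degree of a crisp input x
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def anfis_forward(x, y):
    # Premise parameters (center, width) for each linguistic label (illustrative)
    mf_x = {"low": (0.0, 1.0), "high": (2.0, 1.0)}
    mf_y = {"low": (0.0, 1.0), "high": (2.0, 1.0)}
    # Layer 2: fuzzification of both inputs
    mu_x = {k: gauss(x, *p) for k, p in mf_x.items()}
    mu_y = {k: gauss(y, *p) for k, p in mf_y.items()}
    # Layer 3: rule firing strengths via the product AND operator
    rules = [("low", "low"), ("low", "high"), ("high", "low"), ("high", "high")]
    w = [mu_x[a] * mu_y[b] for a, b in rules]
    # Layer 4: normalization so that the strengths sum to unity
    total = sum(w)
    w_norm = [wi / total for wi in w]
    # Layer 5: TSK linear consequents f_i = p*x + q*y + r, weighted average
    consequents = [(1.0, 0.5, 0.0), (0.8, -0.2, 1.0), (-0.3, 1.1, 0.5), (0.2, 0.2, 2.0)]
    f = [p * x + q * y + r for p, q, r in consequents]
    return sum(wn * fi for wn, fi in zip(w_norm, f))
```

When all rules fire equally, the normalized strengths are identical and the output reduces to the mean of the rule consequents.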
ANFIS employs a hybrid learning algorithm that combines gradient descent with least squares estimation. During the forward pass, input signals propagate through the network and the consequent parameters are identified using the least squares method. In the subsequent backward pass, error signals propagate backward through the network and the premise parameters (those associated with the membership functions) are updated using gradient descent. This iterative process continues until a desired level of accuracy is achieved. A primary advantage of ANFIS is its inherent interpretability and transparency: the model can be translated directly into a fuzzy rulebase that is readily understandable to humans. It is often characterized as a "grey-box" model, offering a more transparent alternative to opaque "black-box" deep neural networks [2]. This interpretability is vital for fostering trust in AI systems, especially in sensitive applications [9]. ANFIS also excels at uncertainty handling due to its foundation in fuzzy logic. Fuzzy logic inherently accommodates uncertainty and linguistic variables by allowing partial membership, making ANFIS robust in environments characterized by imprecise or vague information [3]. Furthermore, ANFIS demonstrates strong adaptability and nonlinear mapping capabilities: it can learn and adjust to complex nonlinear relationships from data, making it well suited for dynamic scenarios and control systems [1]. Its ability to continuously optimize performance and adapt to changing conditions in real time is a significant benefit [12]. The system also exhibits robustness to noise and efficacy with limited data: ANFIS has shown robust modeling performance when handling noisy data within nonlinear systems [4], and it retains a rapid learning capability even with limited or unevenly distributed datasets [4].
Despite its strengths, ANFIS faces certain limitations. A long-observed challenge is the trade-off between an algorithm's transparency and its predictive accuracy: interpretable systems often tend to be less accurate, while more accurate ones can be less transparent [5]. Traditional ANFIS models may also struggle with high-dimensional and evolving data, as they often lack explicit mechanisms for dynamic rule and attribute management. This can limit transparency and interpretability and adversely affect performance in domains where data characteristics and operational conditions change rapidly. The parameter optimization process in Neuro-Fuzzy System (NFS) models can also be challenging, as conventional methods are prone to becoming trapped in local optima [2]. Additionally, the standard ANFIS architecture is designed for a single output, which can be a constraint for multi-output systems.
Recent advancements have aimed to address some of these limitations. ADAR (Adaptive Dual-Weighting and Rule Management), for instance, integrates dual weighting mechanisms for attributes and rules with automated growth and pruning strategies, streamlining complex fuzzy models by regulating rule complexity and feature importance to enhance scalability and transparency in high-dimensional and evolving data environments. Dimensionality reduction techniques, such as deep auto-encoders, can compress high-dimensional input data for NFS, significantly reducing computational complexity while maintaining predictive accuracy. Optimization algorithms such as Particle Swarm Optimization (PSO) and other metaheuristics can tune the parameters of membership functions, improving robustness and accuracy while reducing model complexity. Hierarchical and modular NFS frameworks offer another approach, dividing high-dimensional input spaces into smaller, manageable subspaces in which localized fuzzy inference is conducted independently before the results are combined; this strategy offers distinct advantages in interpretability and performance optimization. Finally, Deep Convolutional Neuro-Fuzzy Inference Systems (DCNFIS) represent a significant hybrid: a Convolutional Neural Network's (CNN) convolutional base acts as an automated feature extractor for a modified ANFIS classifier, achieving state-of-the-art accuracy comparable to CNNs while retaining fuzzy rule interpretability and allowing end-to-end training [5].
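As a concrete illustration of the forward-pass half of the hybrid learning scheme described at the start of this subsection, the consequent parameters can be estimated by least squares while the premise parameters stay fixed. The toy target function, rule count, and membership parameters below are illustrative assumptions:

```python
import math

def gauss_mf(x, c, s):
    # Fixed premise parameters: Gaussian membership (center c, width s)
    return math.exp(-((x - c) ** 2) / (2 * s * s))

def solve(M, v):
    # Tiny Gaussian elimination with partial pivoting for the normal equations
    n = len(M)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n + 1):
                A[r][c] -= f * A[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (A[k][n] - sum(A[k][j] * x[j] for j in range(k + 1, n))) / A[k][k]
    return x

# Toy target y = 2x + 1 on [0, 4], modeled with two rules f_i = p_i*x + r_i
xs = [i * 0.1 for i in range(41)]
ys = [2 * x + 1 for x in xs]
rows, rhs = [], []
for x, y in zip(xs, ys):
    w1, w2 = gauss_mf(x, 0.0, 1.5), gauss_mf(x, 4.0, 1.5)
    s = w1 + w2
    # y_hat = (w1/s)(p1*x + r1) + (w2/s)(p2*x + r2): linear in (p1, r1, p2, r2)
    rows.append([w1 / s * x, w1 / s, w2 / s * x, w2 / s])
    rhs.append(y)

# Normal equations A^T A theta = A^T y give the least-squares consequents
ATA = [[sum(r[i] * r[j] for r in rows) for j in range(4)] for i in range(4)]
ATy = [sum(r[i] * t for r, t in zip(rows, rhs)) for i in range(4)]
theta = solve(ATA, ATy)
```

Because the target is globally linear, both rules recover the same slope and intercept; in the full ANFIS loop, a backward gradient-descent pass would then refine the membership centers and widths.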
2.2. Physics-informed Neural Networks (PINN)
Physics-Informed Neural Networks (PINNs) represent a transformative paradigm in machine learning, integrating data-driven learning with the governing physical laws of a system [14]. Fundamentally, PINNs convert the challenge of approximating solutions to complex physical phenomena, often described by differential equations, into an unconstrained minimization problem [16].
The core principle of PINNs lies in modifying the neural network's loss function [15]. Unlike traditional neural networks that primarily minimize the discrepancy between predictions and observed data, PINNs incorporate an additional penalty term that quantifies the violation of the system's governing equations. The total loss function (L) for a PINN is typically composed of a sum of different terms:
1. Ldata (Data Loss): This is the conventional loss term, often a mean squared error, which minimizes the difference between the neural network's predictions and the actual observed data points.
2. Lequation (Equation Loss): This is the defining "physics-informed" component. It represents the residual of the governing differential equation(s) (e.g., Partial Differential Equations, PDEs). By minimizing this term, the neural network is compelled to satisfy the underlying physical laws.
3. LIC (Initial/Boundary Condition Loss): An optional but frequently crucial term that enforces initial and/or boundary conditions. This ensures that the model finds the specific solution relevant to the problem, as differential equations often have infinitely many solutions without such constraints.
The architecture of a PINN itself is typically a standard neural network, such as a multi-layer perceptron. However, its training objective is fundamentally altered by the inclusion of these physics-based terms in the loss function. Some advanced PINN architectures may even include non-trainable layers designed to enforce hard constraints.
PINNs excel at incorporating physical laws and enhancing generalization. By embedding known physical constraints, PINNs produce predictions that are significantly more precise and generalize better to unseen data [14]. This ensures that the model adheres to fundamental scientific principles, which is critical for robustness in scientific and engineering applications. A notable advantage of PINNs is their data efficiency and reduced data dependency: they can learn effectively with limited or even no observational data, as their learning process is guided by the underlying equations. This makes them particularly suitable for scenarios where data collection is expensive, scarce, or impractical. PINNs also offer a mesh-free formulation, providing an alternative to traditional numerical methods that require mesh generation. This characteristic allows for greater flexibility in handling complex geometries and can reduce manual effort in simulations. For differential equations where an analytical (closed-form) solution is unknown, PINNs provide a robust alternative to conventional numerical methods, which can accumulate errors over many time steps and lose accuracy. Neural networks, and thus PINNs, are effective interpolators: they can promptly provide values for unseen data points within the trained time interval, a task that traditional numerical methods might not perform as readily [16]. Furthermore, integrating physical knowledge directly into the training process helps to prevent overfitting, even when dealing with finite and noisy datasets. The "physics-informed" nomenclature is itself a generalization; the technique applies to any system with a well-defined mathematical model, not exclusively physical systems.
Despite their innovative features, PINNs have certain inherent restrictions. A significant drawback is that PINNs, in their prevalent form, rely on soft constraints and cannot strictly satisfy physical constraints in their predictions. This means they typically strike a balance between approximating the ground truth and favoring first principles, which can lead to physically inconsistent results in critical applications. The development of hard-constraint PINNs (hPINNs) is an active research area, but rigorous mathematical guarantees for them are often lacking. As a relatively recent numerical approach, PINNs also possess a less rigorous theoretical foundation and fewer effective error-analysis tools than established numerical methods [16]. Training PINNs can be time-consuming, and convergence is not always guaranteed; optimizing hyperparameters often requires substantial time and effort [16]. PINNs are also prone to vanishing-gradient problems, particularly in deep networks, and integrating multiple objectives into the loss function can bias gradients [16]. While mesh-free, conventional PINNs can still incur a computational burden due to the large number of collocation points required for pointwise approximation [14]. Additionally, PINNs can face challenges in accurately learning high-frequency components of solutions [19].
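The three loss terms described in this section can be sketched for the toy ODE du/dt = -u with u(0) = 1, using a quadratic polynomial as a stand-in for the network; the ansatz, collocation points, and observations below are illustrative:

```python
import math

def pinn_loss(theta, t_obs, u_obs, t_col):
    a0, a1, a2 = theta

    def u(t):   # polynomial stand-in for the network output u_theta(t)
        return a0 + a1 * t + a2 * t * t

    def du(t):  # exact derivative of the ansatz (autograd stand-in)
        return a1 + 2 * a2 * t

    # Ldata: mean squared error against observations
    l_data = sum((u(t) - y) ** 2 for t, y in zip(t_obs, u_obs)) / len(t_obs)
    # Lequation: residual of du/dt + u = 0 at collocation points
    l_eq = sum((du(t) + u(t)) ** 2 for t in t_col) / len(t_col)
    # LIC: initial condition u(0) = 1
    l_ic = (u(0.0) - 1.0) ** 2
    return l_data + l_eq + l_ic

t_col = [i / 10 for i in range(11)]
t_obs = [0.2, 0.5, 0.8]
u_obs = [math.exp(-t) for t in t_obs]

good = pinn_loss((1.0, -1.0, 0.5), t_obs, u_obs, t_col)  # Taylor coeffs of exp(-t)
bad = pinn_loss((1.0, 0.0, 0.0), t_obs, u_obs, t_col)    # ignores the physics
```

Parameters close to the Taylor coefficients of exp(-t) yield a much smaller composite loss than a physics-ignorant guess, which is exactly the gradient signal a PINN optimizer would exploit.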
Table 1 shows the comparative analysis between ANFIS and PINN structures.
3. Synergistic Integration: The ANFIS-PINN System
3.1. Rationale for Hybridization: Complementary Strengths
The conceptualization of an ANFIS-PINN system is driven by the desire to transcend the limitations of individual ANFIS and PINN models by leveraging their distinct yet complementary strengths. This hybridization aims to create a more powerful, robust, and trustworthy AI system for complex scientific and engineering problems.
One compelling reason for this integration is bridging interpretability and physics adherence. PINNs excel at embedding physical laws, leading to high accuracy and generalization, but they often remain "black-box" models, making their internal workings opaque. ANFIS, conversely, provides inherent interpretability through its fuzzy rulebase, which can be directly understood by humans. While ANFIS can sometimes face accuracy trade-offs, an ANFIS-PINN system could offer "interpretable physics-informed AI" by allowing the fuzzy logic to explain the physical behaviors or decisions made by the network, thereby overcoming the opacity of deep neural networks while retaining physical consistency.
This integration could lead to a deeper understanding, specifically providing interpretable physical regimes. PINNs learn to satisfy governing differential equations (PDEs) globally, but the internal representation of
how these laws are satisfied across different physical regimes (e.g., laminar vs. turbulent flow, distinct phases of a chemical reaction) remains implicit within the neural network's weights. ANFIS, with its fuzzy rules, naturally segments the input space into "fuzzy regions" where specific rules apply. If these fuzzy regions can be aligned with different physical regimes or operating conditions (e.g., "IF temperature is HIGH AND pressure is LOW THEN reaction is FAST"), the ANFIS component could provide human-understandable explanations for the PINN's behavior in those specific physical contexts. This moves beyond simple "saliency maps" for interpretability to provide rule-based explanations of physical phenomena, which represents a higher level of understanding for domain experts. This advancement could lead to a new class of "explainable scientific AI" where not just predictions, but the underlying physical reasoning, is made transparent, a critical step for scientific discovery, engineering design optimization, and regulatory compliance. Another significant benefit is enhancing robustness in uncertain environments. PINNs reduce data dependency by incorporating physics, while ANFIS is adept at handling noisy data and inherent uncertainty through fuzzy logic. The combination would create a system exceptionally robust to real-world data imperfections and inherent system uncertainties. This robustness extends to quantifying and propagating physical uncertainty. PINNs, in their prevalent form, use soft constraints and "cannot strictly satisfy" physical constraints, implying a degree of approximation in their physical adherence. Fuzzy logic, as utilized in ANFIS, is specifically designed to model and propagate uncertainty. The concept of "Fuzzy Physics-Informed Neural Networks (fPINN)" explicitly addresses "uncertain fields" and "fuzzy partial differential equations"
[21]. The fPINN approach, which employs multiple interval PINNs (iPINNs) for different α-cut levels, demonstrates how fuzzy sets can characterize the possibilistic uncertainty in the inputs and outputs of physical systems. An ANFIS-PINN could leverage ANFIS's fuzzy reasoning not only to handle noisy data but also to quantify and propagate the uncertainty associated with physical parameters or boundary conditions through the PINN's predictions. This would provide not just a point estimate but a fuzzy range of possible outcomes, invaluable for risk assessment and decision-making in uncertain physical systems. This moves towards "uncertainty-aware physics-informed AI," where the system provides a confidence interval or a fuzzy set of solutions, reflecting the inherent uncertainties in the physical system or input data, which is particularly relevant for high-stakes applications. Finally, the integration allows for combining data-driven learning with expert knowledge. ANFIS excels at incorporating expert knowledge through fuzzy IF-THEN rules, while PINNs integrate fundamental physics laws. The hybrid system could bridge the gap between purely data-driven and purely physics-driven models, allowing for the incorporation of quantitative data, precise physical laws, and qualitative expert heuristics or "soft" physical understanding. This leads to adaptive knowledge integration and discovery. PINNs are powerful when physics laws are precisely known. However, in many real-world systems, the governing equations might be partially known, approximate, or even unknown in certain regimes. ANFIS can capture expert knowledge even when formal mathematical models are absent. The Perception-Informed Neural Networks (PrINNs) framework explicitly allows "perception-based information" and "expert knowledge" to be integrated into neural networks, even for systems with "unknown physics laws". An ANFIS-PINN could use the ANFIS component to infer fuzzy rules from data or expert input, representing approximate physical relationships or behavioral heuristics.
These fuzzy rules could then act as "soft" physical constraints or regularization terms in the PINN's loss function, even when a precise PDE is unavailable. Conversely, the PINN's learning could help refine the fuzzy rules or identify new, more precise relationships that were previously only qualitatively understood, leading to a synergistic discovery process in which data, physics, and expert knowledge iteratively inform each other. This aligns with the idea of "discovering new physics" mentioned for PINNs. This creates a "knowledge-evolving physics AI," where qualitative expert insights can be formalized, tested against data and approximate physics, and iteratively refined towards more precise physical models, marking a significant leap towards truly intelligent scientific modeling.
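The α-cut machinery of the fPINN/iPINN approach mentioned above can be illustrated on a toy uncertain decay rate; the triangular fuzzy number and the model u(t) = exp(-kt) are illustrative assumptions:

```python
import math

def alpha_cut(tri, alpha):
    # α-cut of a triangular fuzzy number (a, m, b): a nested interval that
    # shrinks from the support [a, b] (α=0) to the core {m} (α=1)
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

def propagate(tri, t, alpha):
    # iPINN-style propagation per α-level: u(t) = exp(-k t) is monotone
    # decreasing in k, so the output bounds come from the swapped endpoints
    k_lo, k_hi = alpha_cut(tri, alpha)
    return (math.exp(-k_hi * t), math.exp(-k_lo * t))

k_fuzzy = (0.8, 1.0, 1.3)  # uncertain decay rate as a triangular fuzzy number
bands = {a: propagate(k_fuzzy, 1.0, a) for a in (0.0, 0.5, 1.0)}
```

Each α-level yields a nested output interval; stacking the intervals over α reconstructs a fuzzy set over the prediction rather than a single point estimate.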
3.2. Conceptual Architecture of ANFIS-PINN
The integration of ANFIS and PINN can be conceptualized in several architectural paradigms, each leveraging their strengths at different points in the system. The overarching goal is to combine the adaptive, interpretable, and uncertainty-handling capabilities of ANFIS with the physics-adhering and data-efficient nature of PINN. Proposed Integration Points and Data Flow:
3.2.1. Sequential Integration (ANFIS as a Pre-processor/Feature Extractor for PINN)
In this configuration, ANFIS functions as a front-end, processing raw or high-dimensional input data. It would utilize its fuzzification and rule layers to extract meaningful fuzzy features or linguistic representations of the input state. This approach is analogous to Deep Convolutional Neuro-Fuzzy Inference Systems (DCNFIS), where a Convolutional Neural Network acts as the feature extractor for ANFIS. The defuzzified outputs or normalized firing strengths from ANFIS would then serve as inputs to the PINN. This process reduces the dimensionality of the input space for the PINN and potentially provides more robust, interpretable features that are less sensitive to noise, thereby benefiting the PINN's training.
Data Flow: Raw Data → ANFIS (Fuzzification, Rule, Normalization, Defuzzification) → Interpretable Fuzzy Features → PINN (Neural Network + Physics Loss) → Physically Consistent Output.
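A minimal sketch of the ANFIS front-end in this data flow, emitting normalized firing strengths as a compact feature vector for the downstream PINN (the Gaussian label centers are illustrative):

```python
import math

def fuzzy_features(x, y):
    # Fuzzification with Gaussian labels, then normalized rule strengths;
    # the resulting 4-dim vector sums to 1 and is what the PINN would consume.
    def mu(v, c):
        return math.exp(-((v - c) ** 2) / 2.0)

    labels = [0.0, 1.0]  # "low" and "high" centers (illustrative)
    w = [mu(x, a) * mu(y, b) for a in labels for b in labels]
    s = sum(w)
    return [wi / s for wi in w]

features = fuzzy_features(0.3, 0.9)  # low-dimensional, noise-tolerant PINN input
```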
3.2.2. Sequential Integration (PINN as a Pre-processor/Physics-informed Feature Extractor for ANFIS)
Conversely, a PINN could serve as the initial layer, learning a low-dimensional, physically consistent representation of the system dynamics. The PINN would be trained with both data and physics loss terms, ensuring that its internal states or intermediate outputs adhere to the governing equations. These physically-informed latent features or predicted states from the PINN could then be fed into an ANFIS. The ANFIS would subsequently use these refined, physics-consistent features to generate interpretable fuzzy rules and make final predictions or control decisions, providing explainability to the PINN's output. This approach could be particularly useful for control systems where interpretable decision-making based on physical states is critical.
Data Flow: Raw Data → PINN (Neural Network + Physics Loss) → Physically Consistent Latent Features/Predictions → ANFIS (Fuzzification, Rule, Normalization, Defuzzification) → Interpretable Output.
3.2.3. Parallel/Hybrid Integration (ANFIS and PINN in a Unified Loss Function)
This represents the most direct form of integration, where the ANFIS component directly influences the PINN's training through a modified loss function. This aligns with the concept of Fuzzy-Informed Neural Networks (FINNs) within the broader Perception-Informed Neural Networks (PrINNs) framework. The loss function would include terms for data adherence, physical-law adherence (PINN's core objective), and, crucially, fuzzy rule adherence. The fuzzy rule adherence term would penalize deviations from desired fuzzy relationships or expert heuristics. Instead of strictly defined differential equations, some physical constraints could be expressed as fuzzy rules; for example, "IF velocity is HIGH THEN air resistance is VERY HIGH." These fuzzy rules, derived from expert knowledge or learned from data, could be incorporated into the loss function as "fuzzy residuals." The concepts of fPINN [21] and FcINNs directly demonstrate this by solving fuzzy partial differential equations or incorporating fuzzy differential equations into the loss function. This allows for the modeling of systems with inherent uncertainty in their physical parameters or governing laws. The ANFIS component could define the membership functions and fuzzy rules that represent these "fuzzy physical laws," and their violation would contribute to the overall loss, guiding the neural network towards physically plausible and interpretable solutions. This could potentially lead to "harder" fuzzy constraints than the typical PINN soft constraints.
Data Flow: Raw Data → Hybrid ANFIS-PINN Network (single or modular neural network architecture) → Output. Training involves a composite loss:
Ltotal = Ldata + λ1·Lphysics + λ2·Lfuzzy (1)
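A minimal sketch of how the fuzzy rule adherence term could enter such a composite loss, using the example rule "IF velocity is HIGH THEN air resistance is VERY HIGH" (the membership breakpoints and default weights are illustrative):

```python
def mu_high(v):
    # Piecewise-linear "HIGH" membership for a normalized velocity in [0, 1]
    return min(max((v - 0.4) / 0.4, 0.0), 1.0)

def mu_very_high(r):
    # "VERY HIGH" membership for a normalized air-resistance prediction
    return min(max((r - 0.6) / 0.3, 0.0), 1.0)

def fuzzy_residual(velocity, resistance):
    # Rule: IF velocity is HIGH THEN air resistance is VERY HIGH.
    # Penalize predictions whose consequent truth lags the antecedent truth.
    return max(0.0, mu_high(velocity) - mu_very_high(resistance)) ** 2

def composite_loss(l_data, l_physics, preds, lam1=1.0, lam2=1.0):
    # Composite objective: data fit + weighted physics residual + fuzzy term
    l_fuzzy = sum(fuzzy_residual(v, r) for v, r in preds) / len(preds)
    return l_data + lam1 * l_physics + lam2 * l_fuzzy
```

A rule-consistent prediction contributes no fuzzy penalty, while a fast-moving sample predicted with low resistance is penalized at full strength.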
3.3. Proposed Implementation Strategies
Implementing an ANFIS-PINN system requires careful consideration of how to merge their distinct mechanisms. Several strategies can be pursued:
3.3.1. Loss Function Integration
Fuzzy Regularization of Physical Laws. The most promising avenue for deep integration involves modifying the PINN's loss function to incorporate terms derived from ANFIS's fuzzy logic. This mechanism would involve defining a fuzzy loss component (Lfuzzy) that quantifies the degree to which the network's predictions violate a set of fuzzy physical rules or expert-defined constraints. For instance, if a physical quantity X is expected to be "Low" when Y is "High," Lfuzzy would increase if the network predicts X to be "Medium" or "High" under "High" Y. This approach aligns with the concept of "fuzzy residuals" in differential equations, as explored in fPINNs [21], where fuzzy numbers or functions are used directly within the equations. The overall loss function would then aim to minimize the "fuzziness" or uncertainty of these residuals. Furthermore, the "dual weighting mechanism" from ADAR could be adapted for this hybrid system: an ANFIS-like component could dynamically adjust the weights (λ1, λ2) of the data, physics, and fuzzy loss terms during training. This adaptive weighting would allow the system to prioritize adherence to physical laws or fuzzy constraints when data is sparse, or to relax them when data is abundant but potentially noisy. This strategy could effectively address the issue of "biased gradients" that can arise in PINNs when multiple objectives are integrated into the loss function [16].
3.3.2. Hybrid Network Architectures
Modular and Interconnected Designs. Various architectural designs can facilitate the integration:
i. Modular ANFIS-PINN
1) ANFIS as a Rule-Based Controller/Corrector for PINN: A pre-trained or adaptively trained ANFIS could continuously monitor the PINN's output for physical inconsistencies or deviations from fuzzy expert rules. If such inconsistencies are detected, the ANFIS could provide corrective feedback or adjust certain parameters of the PINN. This functions as an automated "human-in-the-loop" system [9-11], but with the fuzzy logic component providing the automated corrective intelligence.
2) PINN for High-Fidelity Simulation, ANFIS for Decision/Control: In this setup, the PINN would handle the underlying complex physical simulation, leveraging its strengths in solving differential equations. The outputs from the PINN would then be fed into a separate ANFIS, which would use its fuzzy logic to make interpretable decisions or control actions, similar to its established applications in control systems.
ii. Deep Neuro-Fuzzy Integration (Inspired by DCNFIS/FINNs)
1) Embedding Fuzzy Logic within Deep PINN Layers: Rather than having a completely separate ANFIS module, fuzzy logic operations (e.g., fuzzification, rule inference) could be integrated as custom layers or activation functions directly within the deep neural network architecture that forms the PINN. This approach would make the internal representations of the PINN more interpretable and enable end-to-end training of the entire hybrid system.
2) FINNs as a Precedent: The Fuzzy-Informed Neural Networks (FINNs) concept provides a strong precedent for this. FINNs embed fuzzy logic constraints directly into a deep learning architecture's loss function without requiring defuzzification or fixed ANFIS-like structures. This allows for the development of arbitrarily deep or wide networks that are still "fuzzy-informed," combining the flexibility of deep learning with the interpretability of fuzzy logic.
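In that spirit, a fuzzy membership function can be written as an ordinary smooth activation with a closed-form gradient, which is what allows it to participate in end-to-end backpropagation; the centers, widths, and the product t-norm choice below are illustrative:

```python
import math

def gauss_layer(x, c, s):
    # Gaussian membership used as a smooth activation inside the network
    return math.exp(-((x - c) ** 2) / (2 * s * s))

def gauss_layer_grad(x, c, s):
    # Analytic input-gradient: this smoothness is what lets the fuzzy
    # layer participate in end-to-end gradient-based training
    return gauss_layer(x, c, s) * (c - x) / (s * s)

def soft_and(mu_a, mu_b):
    # Product t-norm: a differentiable stand-in for the fuzzy AND operator
    return mu_a * mu_b
```

An autodiff framework would compute the same gradient automatically; the analytic form is shown only to make the differentiability explicit.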
3.3.3. Adaptive Parameter Optimization via Fuzzy Logic
Leveraging ANFIS's adaptive learning capabilities could enable the optimization of PINN-specific parameters, such as the weighting coefficients (λ) in the loss function, or even the network's hyperparameters (e.g., learning rate, number of layers/neurons). An outer ANFIS controller could observe the training progress (e.g., convergence of data loss versus physics loss, physical consistency metrics) and dynamically adjust the PINN's training parameters using fuzzy rules (e.g., "IF physics loss is HIGH AND data loss is LOW THEN INCREASE physics loss weight SLIGHTLY"). This adaptive approach could help address the limitation that "optimizing hyper-parameters requires a lot of time and effort" in PINNs, and potentially mitigate issues like getting trapped in local optima.
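The quoted rule can be sketched as a tiny fuzzy controller acting on the physics-loss weight; the membership shapes and step size are illustrative assumptions:

```python
import math

def high(v, scale=1.0):
    # Smooth "HIGH" membership on a loss value, saturating towards 1
    return 1.0 - math.exp(-v / scale)

def adjust_physics_weight(lam, l_physics, l_data, step=0.1):
    # Fuzzy rule: IF physics loss is HIGH AND data loss is LOW
    #             THEN increase the physics loss weight slightly.
    truth = min(high(l_physics), 1.0 - high(l_data))  # AND = min, LOW = NOT HIGH
    return lam * (1.0 + step * truth)

lam = adjust_physics_weight(1.0, l_physics=2.0, l_data=0.05)
```

Because the rule fires to a degree rather than all-or-nothing, the weight update stays small and proportional to how strongly the training state matches the antecedent.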
3.3.4. Uncertainty Quantification and Explainability Enhancement
The ANFIS component can naturally provide fuzzy outputs, representing degrees of certainty or possibility, rather than crisp point predictions. When combined with PINN, this would allow the system to quantify the uncertainty in its physically-informed predictions, providing a more complete picture for decision-makers, especially in scenarios involving imprecise data or inherent system variability. Furthermore, the fuzzy rules learned by the ANFIS component can serve as direct, human-interpretable explanations for the hybrid system's behavior, particularly concerning how it adheres to or deviates from physical principles. This directly addresses the "black-box" nature of traditional PINNs. Techniques such as saliency maps derived from fuzzy rules, as demonstrated in DCNFIS [5], could also be employed to visually explain the system's focus.
Table 2 shows the proposed ANFIS-PINN integration paradigms.