Modular Square One-Way Function & Square Root Algorithm (Part-2): AI practical approaches for applying the paper results in GPT


Author : Ahmed Mohammed Al-Fahdi

Volume/Issue : Volume 10 - 2025, Issue 2 - February


Google Scholar : https://tinyurl.com/2s4hbjex

Scribd : https://tinyurl.com/29vkuuex

DOI : https://doi.org/10.38124/ijisrt/25feb1287


Note : A published paper may take 4-5 working days from the publication date to appear in PlumX Metrics, Semantic Scholar, and ResearchGate.

Note : Google Scholar may take 15 to 20 days to display the article.


Abstract : This paper builds upon a previous paper entitled "Modular Square One-Way Function & Square Root Algorithm: Analyzing the Algorithm for Randomness, Regularity Schematic (Codec System) and Vector Normalization". In that paper, the modular square one-way function was analyzed, yielding the quadratic-residue pattern presented in the numerical analysis of the results section. Analyzing the integer-factorization results led to an unexpected schematic regularity in the irrational part of the remainder (the decimal expansion) of a nonperfect square root. This regularity was surprising, as the results were expected to be random. Rounding these numbers and normalizing them yields what is innovatively called the modular factor symbol, analogous to the Legendre symbol. The resulting codec pattern has the characteristics of a Hilbert envelope, with skewness around the perfect-root pattern shaped by a Hann window. On a GPU, such calculations can be computed quickly by rounding the irrational part (decimal expansion) of the nonperfect square root with IEEE 754 [1] floating point, as in the fast inverse square root [1]. All of the above suggests a statistical analysis of the root mean square error (RMSE). RMSE is a powerful estimator for the prediction models used in artificial intelligence (AI), especially in reinforcement learning (RL). As a new approach in AI, Google DeepMind researchers are investigating regression-analysis algorithm tuning and the representation of numerical values as discrete tokens for large language models (LLMs). Such dataset tokenization and tuning algorithms benefit both the speed and the predictability of the model, as has been recognized in DeepSeek [4]. Building on all of the above, and considering AI as a new evaluation approach, this paper discusses the implementation of these results in sampling, tokenizing, clustering, and compressing the base model of a GPT, along with fine-tuning the neural network (NN) reasoning of the reinforcement model.
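As a minimal Python sketch of two ideas named in the abstract: the IEEE 754 bit-level fast inverse square root discussed in [1] and [10], and RMSE as the prediction-model estimator discussed in [9] and [11]. The function names (fast_inv_sqrt, rmse) and the test values are illustrative assumptions, not taken from the paper.

    import math
    import struct

    def fast_inv_sqrt(x: float) -> float:
        # Quake-style fast inverse square root: reinterpret the IEEE 754
        # 32-bit float bit pattern as an integer, shift and subtract from a
        # magic constant, then refine with one Newton-Raphson step.
        i = struct.unpack('<I', struct.pack('<f', x))[0]
        i = 0x5F3759DF - (i >> 1)
        y = struct.unpack('<f', struct.pack('<I', i))[0]
        return y * (1.5 - 0.5 * x * y * y)

    def rmse(predicted, observed):
        # Root mean square error over paired samples.
        return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                         / len(predicted))

    # Irrational (decimal-expansion) part of a nonperfect square root,
    # rounded under IEEE 754 binary64 (Python floats follow this standard).
    n = 10
    frac = math.sqrt(n) - math.isqrt(n)
    print(f"fractional part of sqrt({n}) = {frac:.8f}")

    preds = [fast_inv_sqrt(k) for k in range(2, 12)]
    exact = [1.0 / math.sqrt(k) for k in range(2, 12)]
    print(f"RMSE of fast_inv_sqrt vs. exact: {rmse(preds, exact):.2e}")

And a sketch of representing a numerical value as a sequence of discrete tokens, in the spirit of decoding-based regression [5]; the token vocabulary used here (sign, mantissa digits, exponent) is assumed for illustration only and is not the vocabulary of that work.

    import math

    def tokenize_number(x: float, digits: int = 6):
        # Encode a float as <sign> <digit>*digits <exponent> tokens.
        sign = '+' if x >= 0 else '-'
        x = abs(x)
        exp = 0 if x == 0 else math.floor(math.log10(x))
        mantissa = x / (10 ** exp) if x else 0.0
        digit_tokens = [f"<{d}>" for d in f"{mantissa:.{digits - 1}f}".replace('.', '')]
        return [f"<{sign}>"] + digit_tokens + [f"<E{exp}>"]

    print(tokenize_number(0.16227766))
    # ['<+>', '<1>', '<6>', '<2>', '<2>', '<7>', '<8>', '<E-1>']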

Keywords : AI, Tokenization, Riemann Integral, Hilbert Envelope, Hann Window, Skewness, Generative Pre-Trained Transformer (GPT), Auto-Regressive Transformer, Base Model, Reward Modeling, RPE, Vision-Based Estimation, SL, Sequence-Based Estimation, Temporal Difference (TD), Root Mean Square Error (RMSE), Statistical Analysis, Numerical Analysis, Regression Analysis, Pigeonhole Clustering, Matryoshka Structure, Reinforcement Learning (RL), RMSE Estimator, Floating Point IEEE 754, Auto-Regression Tuning, Parameter Tuning of Decoder Unit, Vector Normalization, Normalized Token, Minimum Value Estimator (MVE), Statistical Estimation (Mean, Median), Histogram-Based Regression.

References :

  1. Ahmed Al-Fahdi, "Modular Square One-Way Function and Square Root Algorithm," June 2024.
  2. DeepSeek-AI, "DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model," June 2024.
  3. DeepSeek-AI, "Incentivizing Reasoning Capability in LLMs via Reinforcement Learning," Jan. 2025.
  4. Qwen Team, Alibaba Group, "Qwen2.5-VL Technical Report," Qwen, China, Feb. 2025.
  5. Google DeepMind, "Decoding-based Regression," Song and Bahri (equal contribution), 31 Jan. 2025.
  6. Google DeepMind, "Matryoshka Quantization," Nair, Datta, Dean, Jain, and Kusupati (equal contribution), Feb. 2025.
  7. Stanford University, "The Pigeonhole Principle, Lectures 7 & 8," https://web.stanford.edu/class/archive/
  8. UC Davis Mathematics, "The Riemann Integral," https://www.math.ucdavis.edu; and "Riemann Sum and Riemann Integral Explained," Jan. 2020.
  9. Chai and Draxler, "Root mean square error (RMSE) or mean absolute error (MAE)? Arguments against avoiding RMSE in the literature," May 2014.
  10. Andrei Seymour-Howell, "Fast Inverse Square-Root Program," 2021.
  11. Olumide, "Root Mean Square Error (RMSE) in AI: What You Need To Know," Aug. 2023.
