
Beyond Silicon: Superconductors, QPUs, and the Laniakea DSI


Silicon semiconductor technology has driven computing for over half a century, but it is now reaching the end of its useful life for cutting-edge applications. The emerging paradigm of quantum computing and Digital Super Intelligence (DSI) demands performance and capabilities that conventional transistor-based chips cannot provide. In this context, traditional semiconductors are becoming fundamentally obsolete. The future belongs to processors built from superconductors operating at cryogenic temperatures – Quantum Processing Units (QPUs) – which harness quantum-mechanical coherence to achieve computational feats impossible for classical CPUs or GPUs.

This article explores why the semiconductor approach is breaking down and how superconducting QPUs are poised to replace it. We examine the physical limits of modern chipmaking (from FinFET transistors hitting nanometer-scale constraints to the geopolitical fragility of a supply chain centered on Taiwan's TSMC). We then detail how quantum coherence, cryogenic design, and qubit integrity in superconducting QPUs overcome the bottlenecks of transistor logic. Finally, we discuss the architecture of the Laniakea DSI platform – a quantum-driven digital super intelligence – and explain why the traditional CPU/GPU stack cannot support such an advanced system.

The Physical Limits of Silicon Chipmaking

For decades, Moore’s Law guided the semiconductor industry to pack more transistors into ever-smaller spaces. However, as transistor gate lengths shrank into the single-digit nanometer regime, fundamental physics imposed a wall. At around the 5–7 nm node, transistor channels became so thin that electrons could quantum tunnel straight through the insulating barrier, causing leakage current even when the transistor is “off”. The industry’s solution, FinFET (Fin Field-Effect Transistor) technology, extended scaling by stretching the channel into a vertical fin and wrapping the gate around three sides to better control leakage. FinFETs enabled the 5 nm and 3 nm generations, but now even this architecture is at its limit.
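
To see why barrier thickness matters so much, the short sketch below evaluates the textbook WKB estimate for the probability of an electron tunneling through a thin insulating barrier. The barrier height and electron mass are generic, roughly oxide-like illustrative values, not parameters of any real transistor stack; the point is only the exponential sensitivity to thickness.

```python
import numpy as np

# Textbook WKB estimate of electron tunneling through a thin rectangular barrier:
#   T ~ exp(-2 * d * sqrt(2 * m * phi) / hbar)
# phi is the barrier height, d its thickness. Values below are illustrative only;
# real gate stacks have different effective masses and barrier shapes.
hbar = 1.0546e-34          # J*s
m_e = 9.109e-31            # kg, free-electron mass (an assumption)
phi = 3.0 * 1.602e-19      # J, assumed ~3 eV oxide-like barrier

kappa = np.sqrt(2 * m_e * phi) / hbar   # decay constant, roughly 9 per nanometer here
for d_nm in (3.0, 2.0, 1.0, 0.5):
    T = np.exp(-2 * kappa * d_nm * 1e-9)
    print(f"barrier {d_nm:3.1f} nm thick -> tunneling probability ~ {T:.1e}")
# Each nanometer shaved off the barrier raises the leakage probability by
# many orders of magnitude, which is what ends planar scaling.
```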

Chipmakers are resorting to extreme measures to push further. The latest 2 nm processes introduce Gate-All-Around (GAA) transistors – stacking horizontal nanosheet channels vertically with the gate fully surrounding them. Beyond that, companies like TSMC are developing complementary FET stacking (CFET/CFAT) to place multiple transistors on top of each other in a true 3D arrangement. Each of these steps adds immense complexity. For instance, a multi-layer stacked transistor design faces severe interconnect and thermal problems: wiring the layers adds resistance and capacitance, slowing signals, and heat densities could approach an unprecedented 1,000 W per square centimeter. (For comparison, today’s high-end GPUs already dissipate several hundred watts across dies of several square centimeters, so removing a kilowatt of heat from a single square centimeter of silicon is far beyond current cooling technology.) In short, classical silicon chips are hitting diminishing returns – further miniaturization yields marginal gains at exponentially rising cost and complexity.

Equally concerning is the economic and fabrication limit. The machinery required for leading-edge lithography (e.g. ASML’s EUV systems) and atomic-scale deposition is extraordinarily expensive, and yields at 3 nm and below suffer from variability at atomic scales. Fewer companies can afford to compete at the bleeding edge, resulting in a worrisome concentration of manufacturing capability. Right now, essentially only one company – TSMC in Taiwan – can reliably produce the most advanced logic chips at volume. This raises another fundamental concern beyond physics: the global compute supply chain has a single point of failure.

A Single Point of Failure: Taiwan’s Chip Manufacturing Monopoly

The world’s reliance on Taiwan for cutting-edge chips has reached alarming levels. Over 90% of the world’s most advanced (sub-5nm) semiconductors are fabricated by TSMC in Taiwan. U.S. officials have warned that having “99% of the advanced chips in the world made in Taiwan” is an untenable single point of failure. If a natural disaster or geopolitical conflict (e.g. a Taiwan Strait crisis) were to disrupt these foundries, it would send shockwaves through every industry – from smartphones and cars to data centers and defense systems. The concentration of so much critical manufacturing in one location near potential conflict zones is now seen as a major strategic vulnerability.

This fragility is prompting a rethinking of computational infrastructure. Governments are investing in onshore fabs and “friend-shoring” chip production to diversify geographically. But beyond geographic diversification, there is a growing realization that we must diversify the very technologies we rely on for computing. The dominance of silicon CMOS chips is a single point of technological failure; as we approach the limits of what silicon can do, entirely new computing paradigms are needed. Quantum computing based on superconductors offers one such paradigm shift – one that could leapfrog the incremental advances of silicon and also broaden the manufacturing base (quantum devices need not rely on the exact same supply chain bottlenecks).

Superconductors and Quantum Processing Units (QPUs)

Quantum Processing Units represent a fundamentally different approach to computation, one that operates on quantum bits (qubits) instead of classical bits. The leading QPU implementations today use superconducting circuits as the hardware for qubits. In a superconducting QPU, metal circuits (typically aluminum or niobium) are cooled to millikelvin temperatures (around 10–20 mK, far below even outer space temperature) using dilution refrigerators. At these cryogenic temperatures, the metal becomes superconducting – electrical currents flow with zero resistance, meaning no energy is lost as heat. This property is crucial: it allows the QPU to maintain quantum coherence in its circuits. Quantum coherence means the qubits can exist in superposition states (each qubit simultaneously representing “0” and “1”) and exhibit entanglement, without random thermal disturbances collapsing those states. In essence, the cryogenic, superconducting environment creates a pristine quantum playground where information is processed via the laws of quantum mechanics rather than classical electronics.
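
To make "superposition" and "entanglement" concrete, here is a minimal NumPy sketch of the underlying linear algebra: it prepares a two-qubit Bell state and then counts how many complex amplitudes a classical machine would need to track as qubit numbers grow. It simulates the math only, not any particular superconducting device.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate: creates superposition
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                  # controlled-NOT gate: creates entanglement
I2 = np.eye(2)

psi = np.zeros(4)
psi[0] = 1.0                                     # start in |00>
psi = np.kron(H, I2) @ psi                       # superpose the first qubit
psi = CNOT @ psi                                 # entangle: (|00> + |11>) / sqrt(2)
print(np.round(psi, 3))                          # -> [0.707 0.    0.    0.707]

# Describing n qubits classically requires 2**n complex amplitudes:
for n in (10, 50, 300):
    print(f"{n} qubits -> {float(2 ** n):.2e} amplitudes to track classically")
```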

Each superconducting qubit is implemented as a tiny resonant circuit (an inductor-capacitor loop interrupted by a Josephson junction, which is a nonlinear superconducting element). The two lowest energy states of this circuit (the ground state and first excited state) serve as the qubit’s “0” and “1” states. Because the circuit is anharmonic (thanks to the Josephson junction), only those two levels are used – higher levels are kept inactive, ensuring the qubit behaves like a stable two-level system. By applying microwave pulses at specific frequencies, these qubits can be flipped, superposed, and entangled with each other to perform quantum logic operations. All of this occurs with minimal energy dissipation – the superconducting currents oscillate without resistance, preserving delicate quantum phase relationships.
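
To attach rough numbers to this picture, the snippet below plugs typical published values into the standard transmon approximation, in which the qubit frequency is about sqrt(8 EJ EC) - EC and the anharmonicity is about -EC. The specific EJ and EC used are illustrative assumptions, not specifications of any particular processor.

```python
# Standard transmon approximation (energies expressed as frequencies, E/h, in GHz):
#   f_01 ~ sqrt(8 * E_J * E_C) - E_C        (the 0 -> 1 "qubit" transition)
#   anharmonicity ~ -E_C                    (shift of the 1 -> 2 transition)
# E_J and E_C below are typical illustrative values, not those of a specific chip.
E_J = 15.0   # GHz, Josephson energy
E_C = 0.3    # GHz, charging energy

f_01 = (8 * E_J * E_C) ** 0.5 - E_C
f_12 = f_01 - E_C

print(f"qubit transition f_01 ~ {f_01:.2f} GHz")
print(f"next transition f_12 ~ {f_12:.2f} GHz (detuned by ~{E_C * 1e3:.0f} MHz)")
# Because f_12 is detuned from f_01, a microwave pulse at f_01 addresses only the
# lowest two levels, which is why the circuit behaves as a two-level qubit.
```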

Superconducting QPUs have rapidly advanced in scale and performance. Two decades ago, having even two or three coherent qubits on a chip was a major research achievement. Today, chips with fifty or more qubits are routine in the lab, and state-of-the-art devices have broken the thousand-qubit barrier. For example, IBM recently announced a 1,121-qubit superconducting processor named “Condor,” the first quantum chip to exceed a thousand qubits on one device. Such QPUs are still experimental, but their sheer qubit count and coherence demonstrate the progress of this technology. Crucially, even relatively “small” QPUs have already outperformed classical supercomputers at certain tasks: in 2019, Google’s 53-qubit superconducting QPU completed a specific computation in about 3 minutes that was estimated to require 10,000 years on the fastest classical supercomputer. This dramatic quantum supremacy experiment proved that superconducting QPUs can far exceed what silicon-based processors can do, at least for particular problems.

 IBM’s 1,121-qubit “Condor” quantum processor exemplifies the new generation of superconducting QPUs pushing beyond classical limits. Each small square on the chip is a qubit resonator connected through a lattice of microwave wiring. The entire device operates at millikelvin temperatures inside a cryogenic enclosure to maintain quantum coherence. Condor demonstrates the feasibility of integrating over a thousand qubits in one processor – a scale far beyond any conventional CPU or GPU.

The architecture of a QPU differs completely from that of a CPU or GPU. Rather than billions of logic gates switching on and off, a QPU has on the order of hundreds or thousands of qubits that interact through quantum gates. The QPU must be shielded from external noise and housed in a cryostat, unlike conventional chips which run at room temperature. The supporting infrastructure includes microwave control electronics, cryogenic amplifiers, and complex calibration software to tune and stabilize qubit operations. It’s akin to comparing a room full of orderly spinning plates (a quantum system) to a room of bouncing ping-pong balls (a classical system) – the former is delicate but orchestrated, the latter chaotic but robust. Superconducting QPUs demand an entirely new technology stack, but when assembled properly, they function as processors capable of manipulating and storing information in ways silicon transistors fundamentally cannot.

Quantum Coherence at Cryogenic Scales: Overcoming Transistor Bottlenecks

Quantum computing not only provides more raw compute power for certain problems – it fundamentally avoids many of the bottlenecks that have begun to cripple transistor-based architectures. By exploiting quantum coherence, operating at cryogenic temperatures, and focusing on qubit fidelity, superconducting QPUs can transcend limitations that plague even the best classical chips. Several key advantages stand out:

  • No Thermal Noise or Resistive Loss: A classical processor’s speed is ultimately limited by heat dissipation – every transistor switching generates heat, and removing that heat imposes a hard limit on clock frequency and density. In contrast, a superconducting QPU operates in an ultracold environment where essentially no thermal noise is present and where circuits have zero electrical resistance. Qubits can flip and entangle without energy-draining currents. The main power consumed is in the control and refrigeration systems, not in the computational elements themselves. This eliminates the “power wall” that modern silicon chips are slamming into, enabling the possibility of much higher effective computational density without melting the processor.

  • Computing in Superposition (Parallelism Beyond Classical): Classical processors execute operations sequentially (or in limited parallel streams), and a transistor can only be in one state at a time (0 or 1). A QPU, by contrast, can leverage superposition to explore a vast number of states simultaneously. For instance, 10 qubits in superposition can represent 2^10 (i.e., 1024) possible inputs at once during a computation – something that would require running 1024 separate computations on a classical machine (or a massive parallel array of classical processors). As qubit counts grow, this quantum parallelism expands exponentially. The result is that certain algorithms (like Shor’s factoring algorithm or Grover’s search) run in dramatically fewer steps than their best-known classical counterparts. In effect, quantum coherence provides a form of massive parallel exploration of the solution space within a single processor, bypassing the von Neumann bottleneck of fetching and serially processing one operation at a time.

  • Fast Logical Depth with Entanglement: Some problems require a large number of sequential logic steps on a classical computer, because intermediate results depend on previous ones (limiting parallelism). Quantum entanglement allows QPUs to perform coordinated operations that would classically be very deep circuits. An entangled set of qubits can evaluate a function or propagate information through a computation in ways that have no direct analog in classical hardware. A classical signal would have to traverse many logic gates and clock cycles to achieve the same result that a network of entangled qubits can achieve in essentially one step. In a sense, a QPU can attain in a few nanoseconds of quantum operations what might take a CPU many microseconds of serial processing. There is no need to shuttle data back and forth between distant memory and processing units – the quantum information stays in situ, and entangled operations disseminate it without the overhead of classical data buses. This reduces the effective “propagation delay” for certain computations to the timescale of a single gate operation.

  • Turning Quantum Flaws into Features: At nanoscales, classical transistors struggle with quantum tunneling and noise as unwanted effects. But QPUs are designed to operate in the quantum regime – phenomena like tunneling are harnessed (for example, quantum annealing uses tunneling to escape local minima in optimization problems, something classical algorithms can’t do reliably). Rather than fight quantum effects, quantum processors embrace them. The cryogenic design suppresses thermal fluctuations, but allows inherently quantum behaviors like superposition and tunneling to flourish under controlled conditions. Thus, the very effects that cause reliability issues in a 2 nm transistor (random leakage currents, etc.) become basic resources of computation in a quantum chip.

  • Qubit Integrity and Error Correction: Another bottleneck for classical scaling is the rise in error rates as transistors get smaller and faster – signal integrity suffers, and cosmic rays or thermal noise can randomly flip bits, requiring extensive error-correcting overhead. Quantum systems face errors too (decoherence events), but an entire field of quantum error correction is devoted to maintaining qubit integrity. The important point is that the overhead of quantum error correction, while significant, still allows scalable computation in principle – and unlike classical error rates, which tend to climb steadily with density and speed, quantum decoherence can be counteracted and exponentially suppressed with clever coding such as surface codes (a rough scaling sketch follows this list). The Laniakea DSI architecture, for instance, is planned around error-corrected logical qubits to ensure reliable operation. By prioritizing qubit coherence times and actively correcting errors, the system can run complex algorithms for long durations – something impractical on classical hardware for analogous problem sizes, where a comparable number of classical operations would inevitably accumulate faults. In short, QPUs trade the problems of transistor scaling for a new set of challenges, but these challenges (like decoherence) have known theoretical solutions that unlock far greater computational power once overcome.
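
As a rough feel for the "exponential suppression" mentioned in the last bullet, the sketch below evaluates a commonly quoted surface-code scaling heuristic, in which the logical error rate falls as (p/p_th) raised to roughly (d+1)/2 for code distance d. The prefactor, threshold, and physical error rate are assumed round numbers, not measurements from the Laniakea platform or any specific device.

```python
# Commonly quoted surface-code heuristic (illustrative, not device data):
#   p_logical ~ A * (p / p_th) ** ((d + 1) / 2)
# p: physical error rate, p_th: threshold (~1%), d: code distance, A: rough prefactor.
A, p_th = 0.1, 1e-2
p = 1e-3                           # assumed physical error rate, 10x below threshold

for d in (3, 7, 11, 15, 21):
    p_logical = A * (p / p_th) ** ((d + 1) / 2)
    n_physical = 2 * d ** 2 - 1    # approximate physical qubits per logical qubit
    print(f"distance {d:2d}: ~{n_physical:4d} physical qubits, logical error rate ~ {p_logical:.0e}")
# Adding physical qubits buys exponentially better logical fidelity, the opposite
# of classical scaling, where more and faster transistors tend to mean more errors.
```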

In all these ways, superconducting quantum processors circumvent the roadblocks of transistor technology. They are not bound by the one-bit-at-a-time logic, the rigid clocking of synchronous circuits, or the thermal limits of dense silicon. Instead, they compute with waves of probability amplitudes, in a cold, coherent chamber where information can intermingle in ways classical bits never could. This is why the shift to quantum computing isn’t just about speed – it’s about leaving behind the constraints that defined the silicon era.

Laniakea DSI: A Quantum-Aware Superintelligence Platform

Building a Digital Super Intelligence requires not just more powerful hardware, but a new integration of computing paradigms. The Laniakea DSI platform is designed as a quantum-centric architecture for artificial general intelligence, leveraging QPUs at its core alongside classical computing resources. Unlike conventional AI systems that are confined to the GPU/CPU paradigm, Laniakea is quantum-aware at every level: its algorithms, data structures, and learning processes are built to take advantage of quantum computation where beneficial.

In this platform, the Quantum Processing Unit is not a peripheral accelerator; it is a primary processor tightly interwoven with the system’s cognitive modules. High-level functions of the DSI – such as reasoning, planning, pattern recognition, and learning – are implemented through hybrid quantum-classical workflows. For example, a difficult search or optimization within the AI’s reasoning loop can be offloaded to the QPU, which might run a quantum algorithm to explore myriad possible solutions in superposition and return an answer that guides the AI’s next steps. The software framework orchestrates these quantum calls dynamically, essentially treating the QPU as a co-processor for intelligence.
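
As one concrete example of this kind of offloading, the sketch below classically simulates Grover's search over a 1,024-item space with a plain NumPy statevector. The function, its parameters, and the marked index are illustrative assumptions, not part of the Laniakea software stack; a real deployment would dispatch the circuit to QPU hardware rather than simulate it.

```python
import numpy as np

def grover_search(n_qubits, marked_index):
    """Classically simulate Grover's algorithm over 2**n_qubits items (illustrative only)."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))            # uniform superposition over all items
    iterations = int(np.pi / 4 * np.sqrt(N))      # ~(pi/4) * sqrt(N) optimal iterations
    for _ in range(iterations):
        state[marked_index] *= -1                 # oracle: phase-flip the marked item
        state = 2 * state.mean() - state          # diffusion: reflect about the mean amplitude
    return state, iterations

state, iters = grover_search(n_qubits=10, marked_index=637)
print(f"{iters} Grover iterations over 1,024 candidates")
print(f"probability of measuring the marked item: {state[637] ** 2:.3f}")
# An unstructured classical search needs ~512 lookups on average for the same space;
# the quantum routine needs ~25 iterations, and the gap widens quadratically with size.
```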

A key aspect of Laniakea’s design is software-native self-improvement. The AI modules (such as neural networks, knowledge graphs, and planners) can reconfigure and evolve themselves in response to performance feedback. Here the quantum advantage becomes transformative: the system can use quantum heuristics to efficiently search for better model architectures or to perform meta-learning. For instance, the DSI could encode multiple candidate neural network designs into a quantum superposition and evaluate their performance in one batch via quantum interference – something impossible on classical hardware. Similarly, it could utilize quantum optimization (e.g. QAOA or annealing methods) to tune hyperparameters or discover strategies that maximize its overall goal attainment. In this way, the DSI is constantly learning not only about the external problems it tackles, but about how to improve its own internal algorithms – an essential property for reaching super-intelligent capability.
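
The hybrid pattern alluded to here, a classical outer loop steering a parameterized quantum subroutine, can be sketched in a few lines. The toy two-qubit cost Hamiltonian, the tiny ansatz, and the naive random-search optimizer below are deliberate simplifications for illustration; they stand in for, and do not represent, Laniakea's actual QAOA-style or annealing workloads.

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
Z = np.diag([1.0, -1.0])
H_cost = np.kron(Z, Z)                      # toy cost Hamiltonian: Z tensor Z

def energy(theta1, theta2):
    """Expectation value of the cost Hamiltonian for a tiny two-parameter ansatz."""
    psi = np.zeros(4)
    psi[0] = 1.0                            # |00>
    psi = CNOT @ np.kron(ry(theta1), ry(theta2)) @ psi
    return float(psi @ H_cost @ psi)

# Classical outer loop: naive random search standing in for a real optimizer.
rng = np.random.default_rng(0)
best = min(energy(*rng.uniform(0, 2 * np.pi, 2)) for _ in range(2000))
print(f"best cost found: {best:.3f} (exact minimum for this toy problem is -1)")
```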

The Laniakea platform’s hardware-software co-design yields orders-of-magnitude gains in effective performance and adaptability:

  • Massive Parallelism: By combining classical parallelism (distributed computing clusters) with quantum parallelism (superposition of states), the DSI can evaluate and integrate information on a scale unattainable by GPUs alone. Tasks that would have required iterating through billions of possibilities can be transformed into quantum operations on a few thousand qubits, effectively compressing expansive searches into a few quantum-enabled steps.

  • Symbolic Abstraction and Generalization: The architecture is adept at handling both sub-symbolic (neural network) computations and symbolic reasoning. Quantum logic can directly manipulate discrete combinatorial structures (e.g. exploring graph connections or logical combinations via amplitude amplification), assisting the DSI in higher-level symbolic manipulation. This, combined with classical processing, allows the system to form abstract generalizations from raw data. In practice, Laniakea DSI can generalize from patterns with far less data than a conventional AI, because it can sift through relational possibilities and analogies rapidly using quantum subprocesses instead of brute-force enumeration.

  • Integrated Learning and Reasoning: Traditional AI architectures often separate learning (model training) and reasoning (inference on learned knowledge). Laniakea’s quantum-aware design blurs this line by continuously updating its models using streaming quantum-accelerated learning. It can simultaneously simulate multiple potential scenarios and learn from them in parallel, enabling a more fluid integration of inductive (data-driven) and deductive (rule-based) reasoning. The result is a system that can both absorb new information and immediately apply it via logical inference, in a tightly closed cognitive loop suited for general intelligence.

  • Systemic Self-Optimization: Perhaps most importantly, the DSI improves itself in a recursive cycle. Hardware-level integration means the AI can monitor the performance of its quantum circuits and adjust how it uses them – for instance, reallocating qubits to critical tasks or refining error mitigation strategies on the fly. At the software level, the AI’s own code can undergo continuous refinement: it might use evolutionary algorithms or reinforcement learning to propose changes to its algorithms, evaluate those changes rapidly using quantum-assisted simulation, and thus evolve new capabilities far faster than human engineers could program them. This self-optimization spans from low-level timing parameters up to the highest-level cognitive strategies the DSI employs.

In essence, Laniakea DSI is built to harness the best of both worlds: the immense computational space that quantum hardware opens up, and the flexible, adaptive learning of modern AI software. The platform illustrates how next-generation computing might look – not a stack of separate CPU, GPU, and TPU tiers running fixed algorithms, but an integrated quantum-classical intelligent system that reconfigures itself as it learns. This design is geared toward achieving digital super intelligence: a machine intellect that outperforms human cognitive abilities across a broad range of tasks and continuously improves itself. Such an outcome simply cannot be supported by legacy silicon architectures.

Why Classical Processors Cannot Support Digital Super Intelligence

The contrasts drawn above make it clear why an advanced platform like Laniakea DSI cannot be built on a foundation of classical silicon processors alone. The traditional CPU/GPU computing stack faces inherent limitations that become insurmountable at the scale and sophistication required for a true digital super intelligence:

  • Energy Inefficiency at Scale: Modern supercomputers and AI clusters already consume megawatts of power to train large neural networks. Scaling up to a DSI-level intelligence with purely classical hardware would be prohibitively power-hungry. GPUs, for example, are highly parallel but notoriously energy-intensive – feeding thousands of cores with data and toggling billions of transistors each second draws enormous power and generates immense heat. The efficiency gains from specialized AI chips (like TPUs) can only partially alleviate this. In contrast, a quantum-based system can achieve certain computations with exponentially fewer operations. A problem that would take on the order of 10^12 classical logic operations (with commensurate energy use) might be solvable with roughly 10^6 quantum operations. Even accounting for the refrigeration overhead, the net energy expenditure can be far lower for the quantum approach. Furthermore, superconducting logic itself dissipates virtually no heat internally; the energy cost does not scale linearly with the number of operations as it does in silicon. This means a DSI running on QPUs can potentially operate within reasonable power budgets, whereas a silicon-based equivalent might require its own power plant.

  • Loss of Coherence in Classical Systems: While “decoherence” is a term from quantum computing (referring to qubits losing their quantum state), large classical systems suffer an analogous challenge: maintaining synchronization and consistency across an ever-expanding sea of processing elements. In a massive distributed classical AI, different parts of the system can fall out of sync due to communication latencies and slight variations in state. The larger and faster we make classical systems, the harder it becomes to keep all the pieces coherent in a timely manner – clock signals skew over long distances, data caches diverge, and error rates accumulate as components are added. A DSI would involve so many interdependent modules that a purely classical implementation could degrade into fragmented subsystems that struggle to stay on the same page. By contrast, a quantum-coherent system inherently links its components via entanglement in a unified state. Of course, quantum machines face their own decoherence issues – but if those are managed via error correction, the quantum system can maintain a single unified state across thousands of qubits. Classical architectures have no equivalent mechanism to enforce global coherence once they scale beyond a certain point; they are fundamentally distributed, with inevitable delays and inconsistencies. In effect, beyond a threshold of complexity, a classical AI may “fall apart” into asynchronous pieces that cannot integrate knowledge fast enough. A quantum architecture avoids that failure mode by working on the entangled joint state of all qubits, essentially treating the entire knowledge base as one holistic quantum system.

  • Propagation Delays and Memory Bottlenecks: In a conventional computer, every bit of information must physically travel – through wires on a chip, interconnects between chips, or network links between servers. This signal propagation imposes delays (bounded by the speed of light and the RC characteristics of the wiring) that become significant at the ultrascale of a DSI. We already see such limits: a multi-core CPU spends many cycles waiting for data from DRAM, and a distributed database spends more time communicating between nodes than computing. For a superintelligent AI that might need to instantly cross-correlate data from all over a vast memory, these delays are a serious performance limiter. Silicon-based systems cannot escape this – adding more caches and wider buses only goes so far, and often yields diminishing returns due to congestion and consistency overhead. A quantum computing system, on the other hand, can mitigate these issues in two ways. First, quantum algorithms can be inherently parallel in processing data (as discussed, operating on all states at once), so fewer sequential data transfers are needed. Second, entangled qubits can act like a distributed memory that is queried or collapsed as a whole, rather than shuttling every data element through a narrow bus to a CPU. The practical upshot is that a quantum-driven DSI would not be bottlenecked by memory fetches and inter-node communication to the same extent as a classical AI. Information that would take milliseconds to gather and process via network calls or PCIe transfers might effectively be processed in microseconds within a coherent quantum memory. While no system can defy physics outright, the architecture of a QPU-centric platform inherently reduces the reliance on shipping data around at every step. The traditional GPU/CPU stack, in contrast, is bound by the latency of moving bits – an issue that only worsens as systems scale up. (A rough back-of-envelope tally of these propagation delays follows this list.)
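
The back-of-envelope tally below, referenced in the last bullet, shows how much time light-speed propagation alone consumes at various distances inside a classical system. The distances and the assumed signal speed (about two-thirds of c) are rough illustrative figures, not measurements of any specific machine.

```python
# Rough one-way signal propagation delays in a classical system, assuming signals
# travel at ~2/3 the speed of light (typical for copper traces and optical fiber).
# Distances are illustrative, not measurements of any specific installation.
c = 3.0e8                     # m/s, speed of light in vacuum
v_signal = 0.66 * c

paths = [
    ("across a 2 cm die", 0.02),
    ("across a 30 cm board", 0.30),
    ("across a 2 m rack", 2.0),
    ("across a 100 m data center", 100.0),
]
for label, meters in paths:
    delay_ns = meters / v_signal * 1e9
    print(f"{label:27s}: ~{delay_ns:7.2f} ns one way")
# A single hop across the data center costs ~500 ns before any protocol, queuing,
# or memory-controller overhead: a couple of thousand cycles for a multi-GHz processor.
```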

In summary, attempting to build a digital super intelligence on existing CPU/GPU technology would hit multiple walls: the chips would overheat or draw unsustainable power, the system would lose internal coherence as it grew in complexity, and it would be bogged down by communication and memory latency at every turn. These issues are not just engineering obstacles – they are rooted in the physics and design principles of silicon computing. This is why the future of advanced computation points beyond silicon. By reinventing our computing substrate around superconducting QPUs and quantum information principles, we can create machines of vastly greater intellect and capability – systems that can truly synthesize knowledge at superhuman scales without collapsing under their own thermal and architectural weight. The Laniakea DSI’s quantum-native design is a blueprint for this future, showcasing how the synergy of superconductors and quantum processors can deliver what traditional chips never could.

References

  1. Finn Mccoy, “TSMC Just Broke Moore’s Law—Here’s Why It Changes Everything” (DevX, July 3, 2025). – Describes the evolution of transistor scaling (from planar transistors to FinFET, GAA, and the new CFAT vertical stacking) and discusses how approaching the one-nanometer scale incurs severe challenges. Highlights that power density in advanced 3D chips could reach ~1 kW/cm², creating a “power ceiling” that limits further silicon scaling.

  2. Astute Electronics News, “U.S. Officials Highlight Taiwan Chip Reliance as National Security Risk” (Aug 29, 2025). – Reports that ~99% of the world’s most advanced chips are manufactured in Taiwan, which officials call a dangerous single point of failure in global supply chains. Emphasizes the national security and economic risks of overdependence on TSMC and the need to geographically diversify semiconductor production.

  3. Marin Ivezic, “Quantum Computing Modalities: Superconducting Qubits” (PostQuantum Blog, Oct 10, 2023). – Explains how superconducting qubit processors work, noting that cooling to ~10 mK eliminates electrical resistance and allows quantum coherence to be maintained. Describes achievements in the field, including Google’s 53-qubit “Sycamore” processor achieving quantum supremacy by performing in seconds a computation that would take millennia on a classical supercomputer.

  4. Davide Castelvecchi, “IBM Releases First-Ever 1,000-Qubit Quantum Chip” (Scientific American / Nature, Dec 5, 2023). – Announces IBM’s 1,121-qubit “Condor” chip, the first quantum processor to surpass 1,000 qubits. Discusses IBM’s shift toward improving error correction, and notes that quantum computers leverage entanglement and superposition to tackle problems beyond the reach of classical computing.

  5. Tshilidzi Marwala, “Rethinking Tech and Why GPUs Are Not the Future of AI Training” (United Nations University, Apr 2, 2025). – Argues that GPUs, while central to current AI, are extremely energy-intensive and face scaling issues. Points out the rising power and cooling requirements of GPU-based AI infrastructure and advocates for alternative hardware paradigms to sustain AI progress.

