Copyright © 2025. The authors retain the copyright of this article. This article is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
@article{160738,
  author   = {Dr. K. Kalpana and Dr. B. Paulchamy and Dr. C. Natarajan and J. B. Jebish Kumar},
  title    = {Improved Elliptical Cryptography in FPGA Processor},
  journal  = {International Journal of Innovative Research in Technology},
  year     = {},
  volume   = {10},
  number   = {1},
  pages    = {1101--1110},
  issn     = {2349-6002},
  url      = {https://ijirt.org/article?manuscript=160738},
  abstract = {Moore's law, which asserts that the number of transistors on a microprocessor doubles roughly every two years, explains the rapid advancement of this technology today. As the technology advances, the memory attached to it must store increasingly large amounts of data. If the memory is located off-chip from the CPU, data access is slower and latency is higher; if the memory is located on the same chip as the CPU, data can be retrieved faster with lower latency. On-chip cache memory must therefore be designed to store a large amount of data without increasing the area it occupies on the chip. High-speed microprocessors require cache compression and decompression so that large amounts of data can be accessed without reducing performance, increasing die size, or consuming more power. This paper proposes and designs a lossless cache memory compression and decompression technique for high-speed processors. The method compresses several words in parallel while operating in dictionary mode: each input word is cut in half, and each half is inserted into a dictionary entry separately. The input word length was initially tested at 32 bits and has since been extended to 64 bits and retested. In dictionary mode, data can be accessed more quickly because matches with previously seen data are found rapidly. The proposed technique has been reduced to a register-transfer-level design, which makes it possible to estimate performance, power consumption, and area. The performance gain comes with no decrease in compression ratio, and the compression ratio is compared against that of other techniques currently in use. When compared with IBM's Memory Expansion Technology, the output results of the simulation …},
  keywords = {Cryptography, C-Pack Compression, Data Compression, FPGA, Frequent Pattern Compression},
  month    = {},
}
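The abstract's central idea is dictionary-mode compression with word halving: a 64-bit input word is split into two 32-bit halves, and each half is looked up in (and, on a miss, inserted into) the dictionary independently, so the two lookups can run in parallel in hardware. The following is a minimal software sketch of that idea only, written in C. The dictionary depth (16), FIFO replacement policy, tag values, and the names compress_half and compress_word are all illustrative assumptions; the paper does not specify these details, and its actual C-Pack-style encoding may differ.

#include <stdint.h>
#include <stdio.h>

#define DICT_ENTRIES 16  /* assumed dictionary depth; the paper does not give one */

enum { TAG_MISS = 0, TAG_HIT = 1 };  /* illustrative tags, not the paper's encoding */

typedef struct {
    uint32_t entries[DICT_ENTRIES];
    int count;  /* number of valid entries */
    int next;   /* FIFO replacement pointer */
} dict_t;

/* Match one 32-bit half-word against the dictionary. On a hit, emit a short
 * (tag, index) pair; on a miss, emit the literal value and insert it as a
 * new dictionary entry. */
static int compress_half(dict_t *d, uint32_t half, uint32_t *payload)
{
    for (int i = 0; i < d->count; i++) {
        if (d->entries[i] == half) {  /* match with previously seen data */
            *payload = (uint32_t)i;
            return TAG_HIT;
        }
    }
    d->entries[d->next] = half;       /* miss: insert under FIFO replacement */
    d->next = (d->next + 1) % DICT_ENTRIES;
    if (d->count < DICT_ENTRIES)
        d->count++;
    *payload = half;                  /* the literal travels uncompressed */
    return TAG_MISS;
}

/* Cut a 64-bit input word in half and compress each half against the same
 * dictionary. In hardware the two lookups would proceed in parallel; this
 * sequential model only shows the encoding decisions. */
static void compress_word(dict_t *d, uint64_t word)
{
    uint32_t halves[2] = { (uint32_t)(word >> 32), (uint32_t)word };
    for (int h = 0; h < 2; h++) {
        uint32_t payload;
        int tag = compress_half(d, halves[h], &payload);
        printf("half %d: tag=%d payload=0x%08X\n", h, tag, (unsigned)payload);
    }
}

int main(void)
{
    dict_t d = { .count = 0, .next = 0 };
    uint64_t stream[] = {
        0xDEADBEEFCAFEF00DULL,  /* both halves miss and enter the dictionary */
        0xDEADBEEF00000000ULL,  /* upper half hits entry 0 */
        0xCAFEF00DCAFEF00DULL,  /* both halves hit entry 1 */
    };
    for (size_t i = 0; i < sizeof stream / sizeof stream[0]; i++)
        compress_word(&d, stream[i]);
    return 0;
}

Splitting the word in half plausibly helps in two ways: 32-bit dictionary entries are more likely to repeat than full 64-bit words, and the two independent lookups map naturally onto parallel comparator banks in an FPGA, which is consistent with the abstract's claim of parallel compression without loss of compression ratio.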