Computer Center

Tirupati, India


Wu W.-C.,Computer Center | Chen Y.-M.,National Central University
Lecture Notes in Electrical Engineering | Year: 2015

In 2011, Yeh et al. proposed PAACP, a portable privacy-preserving authentication and access control protocol for vehicular ad hoc networks (VANETs). However, PAACP is breakable in the authorization phase and cannot maintain privacy in VANETs. In this paper, we present a cryptanalysis of an attachable blind signature and demonstrate that PAACP’s Authorized Credential (AC) is neither secure nor private, even if the AC is secretly stored in a tamper-proof device. An eavesdropper can construct an AC from an intercepted blind document, and any eavesdropper can therefore determine who has which access privileges to access which service. To cope with these challenges, this paper proposes an efficient scheme. We conclude that our simple authentication and access control protocol for VANETs not only resolves the identified problems but is also more secure and efficient. © Springer Science+Business Media Dordrecht 2015.


Wu W.-C.,National Central University | Wu W.-C.,Computer Center | Chen Y.-M.,National Central University
Applied Mathematics and Information Sciences | Year: 2012

Vehicular ad hoc networks (VANETs) are emerging to improve road safety and traffic management. Privacy and security are very important in VANETs. Existing authentication protocols to secure VANETs raise challenges such as certificate distribution and reducing the strong reliance on tamper-proof devices. Recently, Yeh et al. proposed a portable privacy-preserving authentication and access control protocol for vehicular ad hoc networks (PAACP). However, PAACP is breakable in the authorization phase and cannot preserve privacy in VANETs. In this paper, we present a cryptanalysis of an attachable blind signature and show that PAACP's Authorized Credential (AC) is neither secure nor private, even if the AC is secretly stored in a tamper-proof device. Our analysis shows that in PAACP an eavesdropper can construct the AC from an intercepted blind document. As a result, PAACP is breakable in the authorization phase, and since any outsider can learn who has which access privileges to access which service, the user's privacy in VANETs is jeopardized. © 2012 NSP Natural Sciences Publishing Cor.


News Article | January 11, 2016
Site: www.scientificcomputing.com

The supercomputer at the Deutsches Klimarechenzentrum (DKRZ), the German Climate Computing Center, is ranked among the largest systems employed for scientific computing. On October 5, 2015, Germany enhanced its leadership in climate research with the inauguration of Mistral, a state-of-the-art HPC system and one of the world’s most efficient supercomputers. The Mistral HPC system is 20 times faster than the previous supercomputer and features a large storage system to house the extensive climate simulation data archive managed by DKRZ. Using the Mistral system and high performance computing (HPC) tools will keep DKRZ at the forefront in supporting scientific and climate modeling research.

Scientists conduct leading-edge climate research and are able to simulate anthropogenic influences on the climate system, including research on clouds. Clouds and precipitation strongly influence atmospheric radiation and are critical for life on earth. The scale of clouds spans from a micrometer (the size of a single cloud particle) to hundreds of kilometers (the dimension of a frontal system). Researchers have to resolve all of these scales, which makes exact modeling of clouds and precipitation physically difficult and extremely resource-consuming in terms of computer time and storage space. Climate modeling research therefore requires supercomputers that combine the power of thousands of computers with HPC tools to simulate complex climate models and research problems.

Mistral is used in the High Definition Clouds and Precipitation for Climate Prediction-HD(CP)2 project, which integrates cloud building and precipitation processes into atmospheric simulations to better understand and research clouds and cloud-related processes. The project uses cloud-resolving modeling to determine cloud formations in central Europe. According to Professor Thomas Ludwig, Director of the German Climate Computing Center, “The unique characteristic of HD(CP)2 is to develop a cloud resolving LES version (Large Eddy Simulation) of the ICON model (Icosahedral non-hydrostatic general circulation model, a joint development of the German Weather Service DWD and the Max Planck Institute for Meteorology, see e.g. www.mpimet.mpg.de/en/science/models/icon.html) in order to explicitly simulate cloud and precipitation processes. The model region is centered on Germany using a grid with a resolution of 10,000 x 10,000 x 400 grid elements and a grid spacing of 100 m (www.hdcp2.eu). Such simulations are computationally very intensive and the necessary computing power can be found only on massively parallel computing platforms. In order to achieve this, DKRZ performed a major refactoring of the ICON model.”

Figure 2 shows a visualization of the simulated cloud water content for one time step with about 3.5 billion cells per time step (22.5 million cells per slice on 150 levels). The data is resampled on the fly from the unstructured ICON grid onto a regular Cartesian grid with a downsampling factor of 1/10. The ICON simulation was performed using over 400 nodes of Mistral, while the visualization was done using the Vapor software on a single GPU node of the system, consuming over 200 GB of main memory. “The ability to conduct this level of cloud and atmospheric research requires the use of a state-of-the-art HPC system. Using Mistral and HPC tools allows DKRZ to run new processes and ensemble members as well as see clouds or local climate at a higher resolution.
Per core, we see a performance improvement of our models between 1.8 and 2.6 using one Intel Xeon processor core, as compared to one 4.7 GHz IBM Power6 core. In times where scientists expect performance gains only through scaling, this is a welcome advancement,” Ludwig states.

Mistral, the new High Performance Computer System for Earth System Research (HLRE-3), comes from the French company Bull, which was purchased by Atos in 2014. Mistral replaces the IBM Power6 system named Blizzard, which had been in operation at DKRZ since 2009. The Mistral supercomputer is being installed in two stages: Phase 1 of the Mistral system began in June 2015, with the second stage of the Mistral expansion scheduled for summer 2016. In parallel with the Phase 1 installation, DKRZ users had access to a small test system with 432 Intel Xeon processor cores and a 300 TByte Lustre file system from Xyratex/Seagate, for the purpose of preparing climate models for the new architecture. During the testing, DKRZ provided training classes on how to use the new system in the areas of debugging, machine usage and visualization tools.

The Mistral system consists of computer components by Bull, a disk storage system by Xyratex/Seagate and high performance network switches by Mellanox. These components are distributed over 41 racks, each weighing a metric ton or more, which are connected by bundles of optical fiber. The Phase 1 Mistral supercomputer has about 1,500 compute nodes based on the bullx 700 DLC system, each with two 12-core Intel Xeon E5-2680 v3 processors (for a total of about 36,000 cores); the system and racks deploy direct hot-liquid cooling. The Mistral Intel processor-based system allows an inlet cooling liquid temperature of 40 degrees centigrade. The hot liquid heats up to 50 degrees centigrade and is piped to the roof, where it is cooled by fans only. “This means that all the racks that have the hot liquid cooling do not require additional expensive chillers, as the temperature on the roof in Hamburg almost never exceeds 40 degrees,” states Ludwig. Mistral provides 24 high-end visualization nodes equipped with powerful graphics processors and 100 further nodes for pre- and post-processing and analysis of data. All components are connected with each other via optical cables and can directly access the shared file system. This means that the results of modeling calculated on the supercomputer can be directly analyzed on the data visualization nodes.

DKRZ does not conduct climate research itself but supports climate modeling and related scientific research. Ludwig indicates, “We participate in various infrastructure and research projects with the aim to support the climate scientists in all aspects of their work in our HPC environment. DKRZ departments support scientists in model parallelization and optimization of the code, data management, storage, data compression, analysis and visualization, help with libraries, improving I/O as well as quality assurance and archiving of data.” DKRZ uses the Allinea DDT debugging tool, the Vampir and Intel VTune performance tuning tools, and Vapor as the visualization tool. The DKRZ staff creates customized in-house tools to address issues such as data compression, scalability and visualization in the parallel climate simulations and research models. There is close cooperation with Ludwig's research group from the University of Hamburg; in fact, his chair for Scientific Computing has its offices in the DKRZ building.
A group of 10 researchers focuses on file system and storage issues and on energy efficiency for HPC. In the Mistral Phase 1 system, DKRZ uses a 20 PByte Lustre file system based on a Xyratex/Seagate CS9000 system with a bandwidth in excess of 150 GB/s; its metadata performance outperforms that of competing systems. The Lustre file system will be expanded to 50 PBytes and 430 GB/s in 2016 as the Mistral system expands. According to Ludwig, “In addition to supporting our users to efficiently utilize the supercomputer, we engage in joint projects to enable new science on the current and future systems. Since our users run a large diversity of different models on our system, DKRZ also develops universally usable libraries to facilitate scalable parallel models (YAXT) and make better use of the available storage capacity through data compression (libAEC).”

In addition to its other services, DKRZ manages the world’s largest climate simulation data archive. The archive is used by researchers worldwide and contains massive amounts of data: it currently holds more than 40 PBytes and is projected to grow by 75 PBytes annually over the next five years. “There is a growing gap in the ability of HPC systems to generate large amounts of data and the cost of storage to store this data. DKRZ estimates we are currently spending 25 percent of our investment budget, as well as the electricity expenses, on storage, and we expect this gap to increase for the climate modeling data created in the future. The widening gap between compute capabilities and storage is a problem which means we need to shift some focus to how to maximize storage if you want to keep the balance in the ability to store all the data being generated.” “Lustre as a file system gets constantly increasing support from major vendors and from the computer science community,” Ludwig said. “We are confident that emerging requirements will be picked up quickly and solutions can be provided promptly.”

DKRZ supports CMIPs and the Intergovernmental Panel on Climate Change (IPCC) research

DKRZ performs simulations for the research community, such as the Coupled Model Intercomparison Projects (CMIPs), which build the foundation for the findings presented in the IPCC reports. Climate modelers in Germany worked on the IPCC project, performing calculations on the DKRZ computer with an Earth system model from the Max Planck Institute for Meteorology that also simulated the carbon cycle. Ludwig indicates, “We stored approximately 2 PBytes of CMIP5 data on the Mistral machine from DKRZ and international centers. Planning is going on for how much data DKRZ will receive on the next CMIP6 project. It is expected there will be 20 to 50 times more data. DKRZ expects to begin computations for the next IPCC report in 2016. The German climate model data contribution for publications that will be included in the next IPCC assessment report will start to be released in 2016 and will be computed exclusively on the expanded Mistral machine. The DKRZ Center has extensive experience in computations and data dissemination for the IPCC report, and with the new Mistral system, we have a powerful computer and storage system to host at least all the computations that will be conducted on the German side, and probably more.” Installation of the expanded Mistral system is predicted to start in February 2016 and will also use the Bull direct hot liquid cooling employed in the Mistral Phase 1 system.
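Those storage figures imply rapid growth of the archive relative to the parallel file system. The short sketch below is only a back-of-the-envelope projection using the numbers quoted above (roughly 40 PBytes today, growing by about 75 PBytes per year for five years); it is not a figure from DKRZ itself.

# Back-of-the-envelope projection using only the figures quoted in the article.
archive_pb = 40.0          # current archive size in PBytes
annual_growth_pb = 75.0    # projected annual growth in PBytes

for year in range(1, 6):
    archive_pb += annual_growth_pb
    print(f"After year {year}: ~{archive_pb:.0f} PBytes in the archive")

# For comparison, the Mistral Lustre file system grows from 20 PBytes (Phase 1)
# to 50 PBytes (2016 expansion), so the long-term archive quickly outgrows the
# parallel file system attached to the supercomputer.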
According to Helena Liebelt, Intel Business Development Manager, “The second phase of the Mistral HLRE-3 System is planned to be available in summer 2016. The expanded Mistral system in 2016 will have more than 3,000 computing nodes and more than 68,000 cores. This extension will roughly double computing and disk storage capacity. With a peak performance of 3 PFlops and a 50 PByte parallel file system, scientists can improve the regional resolution, account for more processes in the Earth system models or reduce uncertainties in climate projections.” Ludwig estimates the expanded Mistral system will be in the Top 100 of the June 2016 TOP500 list and in the top five in file system capabilities, making Mistral one of the top HPC systems worldwide for storage. While DKRZ uses Seagate Lustre in the production environment of Mistral, Ludwig's research group became a member of the Intel Parallel Computing Center for Lustre (IPCC-L) and will conduct research on data compression mechanisms. The group is also using Intel Xeon Phi coprocessors and graphics processing units (GPUs) in its test environment.

How HPC will aid climate modeling in the future

Professor Ludwig indicates that computer scientists face a number of challenges in climate modeling, including “the growing number of cores and the fact that parallelization is becoming more complicated due to multiple runs of climate simulations which are mathematically non-linear. Memory bandwidth is always a problem, because climate modeling applications are memory intensive. The ability to modify code to take advantage of HPC parallelization and optimization is a problem because of legacy code and not enough software engineers to adapt codes. Growing energy requirements may become a limitation in providing more computational power to future climate models. In addition, I/O bandwidth and storage capacity growth may be even harder to maintain. Science is looking to computer scientists to develop software that can handle the huge number of computing elements.”

As supercomputers such as Mistral and HPC tools advance, it will be possible to create finer grids and more grid cells, which will provide a higher resolution of climate information. The German government has funded a project called PalMod that takes the opposite approach and uses a coarse grid for a very long simulated time period. It seeks to apply today’s climate models to 135,000 years of data going back to the ice age. The hope is that this will allow researchers to recompute climate data to see how effective the current climate models are in reproducing past climate changes and as a way to predict climate changes in the future. DKRZ will be involved in supporting PalMod. However, today’s many-core and multi-core processor architectures will probably not be sufficient to achieve the desired performance: a challenge to be addressed jointly by DKRZ and industry.

“DKRZ is the link between hardware vendors, solution providers and the climate research community. Its vision is to make the potential of accelerating technical progress reliably accessible to climate research. We closely follow technological trends and are in permanent contact with companies such as processor producers. At the same time, we participate in climate research projects to learn about the future resources that will be necessary for new insights. We translate between these communities and communicate scientific requirement specifications and technical product characteristics.
An efficient usage of HPC adds optimal value to the science of climate researchers,” states Ludwig.

Linda Barney is the founder and owner of Barney and Associates, a technical/marketing writing, training and web design firm in Beaverton, OR.


News Article | November 23, 2016
Site: www.eurekalert.org

A team of researchers led by Jiajun Cao, a PhD candidate in the College of Computer and Information Science (CCIS) at Northeastern University, recently completed what appears to be the largest known instance of transparent checkpointing. Transparent checkpointing allows computer scientists and engineers working on large projects to save and reopen programs without modifying any code. This assures researchers working across hundreds or thousands of computers that their work will be safe in case of a computer failure. Their programs run on CPU cores, and computers can contain multiple cores, allowing one program to run simultaneously across multiple cores.

Transparent checkpointing could simplify the work of computer scientists handling large amounts of data and using supercomputers to process that data. For example, with transparent checkpointing software, meteorologists can process and analyze billions of pieces of weather data without the fear that a computer crash could erase that work. "The idea of checkpointing is that one can take a running computation, automatically stop it in the middle and save the state of everything to a file on disk," Gene Cooperman, a professor at CCIS and Cao's advisor, explains. "Then you can copy that file to another computer or keep it on the same one. When you restart, the program continues running from where it left off." Cooperman's work with Distributed Multi-Threaded CheckPointing (DMTCP) software, which is responsible for checkpointing, is now in its second decade.

What makes this example of transparent checkpointing significant is the massive amount of data that was run and saved in a short period of time. The MVAPICH software supporting the Message Passing Interface (MPI) was used to run the High Performance Conjugate Gradients (HPCG) program for linear algebra in parallel over 32,768 CPU cores on 2,048 computers. It used a total memory of 38 terabytes and was checkpointed in 10 minutes and 53 seconds. A second program, Nanoscale Molecular Dynamics (NAMD), was run in parallel over 16,368 CPU cores on 1,024 computers, using a total memory of 10 terabytes. It was checkpointed in two minutes and 38 seconds. Checkpointing these amounts of data in 11 minutes or less is a breakthrough for scientists, who are usually restricted to 24-hour time slots within which their programs must run to completion. These runs were carried out on the Stampede supercomputer at the Texas Advanced Computing Center (TACC), one of the world's largest supercomputers. The research was supported by a grant from the National Science Foundation awarded to Cooperman's DMTCP project, under which Cao's checkpointing research falls.

"These results show how the Extended Collaborative Support Services from the National Science Foundation-supported Extreme Science and Engineering Discovery Environment can help scientists and developers improve the scalability and efficiency of their code on high performance computing clusters," says Jérôme Vienne, a research associate at TACC. Dhabaleswar K. Panda, who leads the MVAPICH team at Ohio State, explains that "the results of this collaborative work push the existing capabilities of the MVAPICH2 library further in terms of fault-tolerance and check-pointing." Cao's collaborators include Kapil Arya of Mesosphere, Inc.; Rohan Garg and Gene Cooperman of Northeastern University; Shawn Matott of the Center for Computational Research at the State University of New York at Buffalo; Dhabaleswar K. Panda and Hari Subramoni of Ohio State University; and Jérôme Vienne of the Texas Advanced Computing Center at the University of Texas at Austin. The paper, titled "System-level Scalable Checkpoint-Restart for Petascale Computing," is available to read online. This work will be published at the 22nd Institute of Electrical and Electronics Engineers International Conference on Parallel and Distributed Systems (ICPADS) in December 2016.
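The memory footprints and checkpoint times reported above also imply an aggregate checkpoint rate. The article does not state bandwidth directly, so the quick sketch below should be read only as a derived, back-of-the-envelope estimate.

# Derived estimate from the figures reported above (not stated in the article).
runs = [
    ("HPCG", 38.0, 10 * 60 + 53),   # 38 TB of memory checkpointed in 10 min 53 s
    ("NAMD", 10.0,  2 * 60 + 38),   # 10 TB of memory checkpointed in  2 min 38 s
]

for name, memory_tb, seconds in runs:
    rate_gb_s = memory_tb * 1000.0 / seconds  # TB -> GB, divided by elapsed seconds
    print(f"{name}: ~{rate_gb_s:.0f} GB/s aggregate checkpoint rate "
          f"({memory_tb} TB in {seconds} s)")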


Cui S.,Sichuan University | Cui S.,Computer Center | Liu D.C.,Sichuan University
IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control | Year: 2011

Ultrasonic elastography is an imaging technique providing information about the relative stiffness of biological tissues. In general, elastography suffers from noise artifacts, which degrade lesion detectability and increase the likelihood of misdiagnosis. This paper proposes a method called transmit-side frequency compounding for elastography (TSFC). Beamforming is modified to transmit frames with N alternating center frequencies. Pairs of frames with the same center frequency are used to calculate sub-elastograms that are then averaged to produce one compounded elastogram. Simulation results based on a uniformly elastic tissue model demonstrate the decorrelation among sub-elastograms and the improvement in elastographic signal-to-noise ratio (SNRe) achieved by compounding sub-elastograms. An elastic phantom experiment further validates the noise reduction obtained by the proposed technique. © 2011 IEEE.
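The compounding step itself is easy to illustrate. The sketch below is a minimal, idealized model of only that step: it assumes the sub-elastograms have already been computed from frame pairs at each center frequency and that their decorrelation noise is independent. The array size, strain value and noise level are made up for illustration and are not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Idealized, uniformly elastic medium: the true strain map is constant.
true_strain = np.full((128, 128), 0.01)

def snre(elastogram):
    # Elastographic signal-to-noise ratio for a uniform target: mean strain / std of strain.
    return elastogram.mean() / elastogram.std()

N = 4                 # number of sub-elastograms (one per transmit center frequency); illustrative
noise_sigma = 0.002   # hypothetical decorrelation-noise level per sub-elastogram

# Each sub-elastogram = true strain + independent noise (the decorrelation assumption).
sub_elastograms = [true_strain + rng.normal(0.0, noise_sigma, true_strain.shape) for _ in range(N)]

# Compounding: average the sub-elastograms into one compounded elastogram.
compounded = np.mean(sub_elastograms, axis=0)

print("SNRe of a single sub-elastogram:", round(snre(sub_elastograms[0]), 2))
print("SNRe of the compounded elastogram:", round(snre(compounded), 2))
# With fully decorrelated noise, the compounded SNRe improves by roughly sqrt(N).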


Saxena M.,FGIET | Khan P.M.,Computer Center
2015 International Conference on Computing for Sustainable Global Development, INDIACom 2015 | Year: 2015

Spam emails cause huge losses to businesses on a regular basis. Spam filtering is an automated technique to identify SPAM and HAM (non-spam). Web spam filters can be categorized as content-based spam filters and list-based spam filters. In this research work, we have studied the spam statistics of the well-known spambot 'Srizbi'. We have also discussed different approaches to spam filtering and finally proposed a new algorithm based on the behavioral approaches of spammers, designed to restrict the budding economic growth of spam-generating companies. We have used a hidden honeypot and a honeytrap module to minimize the spam generated from contact and feedback forms on public and social networking CMS websites. © 2015 IEEE.
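To illustrate the hidden-honeypot idea for contact and feedback forms, the sketch below shows one common way such a trap can be wired up. The field name, helper functions and CSS-based hiding are illustrative assumptions, not the specific module described in the paper.

# Minimal sketch of a hidden honeypot field for a contact form. Humans never see
# or fill the hidden field; naive spambots that populate every input do.

def render_form_fields() -> str:
    """Return form HTML containing a honeypot field hidden from human users via CSS."""
    return (
        '<input type="text" name="email" placeholder="Your email">\n'
        '<textarea name="message"></textarea>\n'
        # Bots that blindly fill every field will also fill this invisible one.
        '<input type="text" name="website_url" style="display:none" tabindex="-1" autocomplete="off">'
    )

def is_probable_spambot(form: dict) -> bool:
    """Flag the submission as spam if the hidden honeypot field was filled in."""
    return bool(form.get("website_url", "").strip())

# Usage sketch: drop submissions where the honeypot was triggered.
submission = {"email": "user@example.com", "message": "Hello", "website_url": "http://spam.example"}
if is_probable_spambot(submission):
    print("Discarding submission: honeypot field was filled (likely a spambot).")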


Wu W.-C.,Computer Center | Liaw H.-T.,Shih Hsin University
Lecture Notes in Electrical Engineering | Year: 2015

Because of the growing number of services that wireless communication technology can offer nowadays, the quality of wireless communication has become an important issue. This research mainly considers 3G/UMTS and WLAN, two major wireless communication techniques. The former offers wide-range, high-mobility access with complete and secure accounting records; the latter offers narrow-range, low-mobility, high-speed access to the Internet. The two techniques are complementary, so combining them can not only enhance the quality of wireless communication but also offer more services for customers to choose from, and customers can use wireless application services regardless of environmental limits. This research focuses on the fast-handover problem when 3G/UMTS and WLAN are interworking, in particular authentication and authorization. For these two issues, we use W-SKE to accomplish the authentication procedure and to achieve safer mutual full authentication and fast authentication. © Springer Science+Business Media Dordrecht 2015.


Wu W.-C.,Computer Center | Liaw H.-T.,Shih Hsin University
Security and Communication Networks | Year: 2016

Both computer and telecommunication networks have developed rapidly and become much more popular recently. On the basis of the characteristics of the two networks, we can see that they are complementary. If we can integrate the most popular technologies of these two kinds of networks, the integration will result in a new, attractive network access method. From the viewpoints of both the user and the service provider, the advantages of the integration include increased profits and better support for services. However, the integration of the two heterogeneous networks still presents many problems, the most critical of which concern authentication and billing. In this paper, we propose a practical, efficient, and secure authentication, authorization, and accounting mechanism within the interworking architecture proposed by the Third Generation Partnership Project (3GPP). © 2016 John Wiley & Sons, Ltd.


Zhao M.X.,Computer Center
Advanced Materials Research | Year: 2013

This paper mainly studies trimming and porting the Linux kernel to the ARM platform. Porting the operating system can be done in the following steps: configuration, trimming of the source code, and cross-compilation. The paper gives a detailed description of these steps and of how they are realized. Porting the operating system is the foundation of an embedded system, so it is of great significance for the development of embedded systems. © (2013) Trans Tech Publications, Switzerland.
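As a rough illustration of the configuration, trimming and cross-compilation steps described above, the sketch below drives the standard kernel build targets from Python. The source path, cross-toolchain prefix, build targets and job count are assumptions for a generic ARM board rather than details taken from the paper.

# Minimal sketch of the configure / trim / cross-compile workflow, assuming a
# kernel source tree and an arm-linux-gnueabi- cross toolchain are installed.
import subprocess

KERNEL_SRC = "/path/to/linux"           # hypothetical kernel source tree
CROSS = "arm-linux-gnueabi-"            # hypothetical cross-toolchain prefix

def make(*targets: str) -> None:
    """Invoke the kernel build system for ARM with the chosen cross compiler."""
    subprocess.run(
        ["make", "ARCH=arm", f"CROSS_COMPILE={CROSS}", *targets],
        cwd=KERNEL_SRC,
        check=True,
    )

# 1. Configuration: start from a default config, then trim unneeded drivers and
#    subsystems interactively (menuconfig) to shrink the kernel for the target board.
make("defconfig")       # or a board-specific *_defconfig
make("menuconfig")      # interactive trimming of the kernel configuration

# 2. Cross-compilation: build the compressed kernel image and the modules.
make("zImage", "modules", "-j4")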
