Silicon Valley 411

www.sv411.com/
San Jose, CA, United States

News Article | May 19, 2017
Site: www.forbes.com

Over the past several years, there has been an ongoing narrative that a battle has sprung up between Silicon Valley and the auto industry. The tech industry hype machine wants the world to believe that venture capital-backed startups are going to appear with some magic technology that disrupts and destroys the century-old incumbents. The reality is likely to turn out quite differently, with some of the brightest minds in the valley coming up with cool ideas that become a key part of the transportation ecosystem.

Tech Has Saved the Automobile Industry Before

The fact that the auto industry has remained vibrant over the past 50 years can in large part be traced to innovations that have emerged from the San Francisco Bay Area, particularly the silicon microprocessor that gave the region its nickname. At the onset of environmental regulation at the end of the 1960s, most of the functional aspects of cars were mechanically controlled, and these vehicles consumed more fuel and spewed more pollution than they do today. As engineers struggled to meet the new regulatory requirements, the industry entered what became known to car enthusiasts like myself as the malaise era. Attempts to better control engines through mechanical means like vacuum lines led to many terrible engines with weak output, awful drivability, and barely improved emissions and efficiency.

Silicon Valley saved the auto industry from being suffocated by regulations. As early microprocessors and sensors were applied to engine and transmission management as well as new safety systems like anti-lock brakes, it became clear that computers in the car would be the key to enhanced driving. By the mid-1980s, electronic controls were enabling engineers to extract more power while using less fuel and cleaning up emissions. As fuel economy regulations stopped climbing, car companies offered customers improved performance and capability without making them spend more at the pump. After earning my degree in mechanical engineering, I spent the next 17 years working on improving vehicles through more sophisticated software running on a series of cheaper, yet more powerful slivers of silicon. Today's most sophisticated vehicles utilize anywhere from 50 to 100 onboard computers to manage everything from lights that follow the angle of the steering wheel to automatically maneuvering a truck to connect to a trailer.


Hazard J.,Silicon Valley 411 | Stieber H.,European Commission
New Economic Windows | Year: 2016

In 16th-century Europe, the revolution in printing technology and increasing literacy in European cities created a positive shock to capital productivity. At the same time, the spread of Protestantism in Northern Europe induced individuals to honour contracts or risk exclusion from the Kingdom of God. Max Weber would argue that the religious institution of Protestantism, by dissuading defection from agreements, had allowed a new form of almost trustless exchange with strangers. Strict self-enforcing religious rules restrained individuals from opportunistic behaviour, thus lowering the cost of monitoring and enforcing contracts. This led to increasing commerce and economic growth. A better capitalized, but less strict, Catholic Southern Europe was unable to exert control and reduce contracting costs in the same way, leading to less exchange. We argue that peer-to-peer (P2P) technologies, such as Bitcoin, blockchains, smart contracts, and P2P legal platforms, recall these historical evolutions. We anticipate that these technologies will reduce the cost of contracting, specifically with regard to contract monitoring and enforcement. Trustless exchange without some of the current intermediaries specializing in monitoring and enforcement technologies will have a significant impact on the financial system and its institutional structure. Moving beyond theory, this chapter discusses some of the major manifestations of technologies capable of strongly decreasing the cost of contracting, and it proposes a class of models to explore how P2P technologies, and the concomitant reduction in transaction costs they will cause, can be expected to affect financial exchange. © Springer International Publishing Switzerland 2016.
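The chapter's core mechanism, that cheaper monitoring and enforcement lets more mutually beneficial trades actually happen, can be illustrated with a deliberately simple toy simulation. The sketch below is purely illustrative and is not one of the models proposed in the chapter; all parameters are invented.

```python
import random

def executed_trades(contracting_cost, n_pairs=10_000, seed=42):
    """Count trades that occur when gains from trade exceed contracting cost.

    Each pair of strangers draws a random gain from trade in [0, 1);
    the trade happens only if the gain covers the cost of monitoring
    and enforcing the agreement. All numbers are illustrative.
    """
    rng = random.Random(seed)
    gains = [rng.random() for _ in range(n_pairs)]
    return sum(1 for g in gains if g > contracting_cost)

# Lowering contracting costs (e.g., via smart contracts or P2P platforms)
# raises the share of mutually beneficial trades that actually execute.
for cost in (0.6, 0.3, 0.05):
    print(f"contracting cost {cost:.2f}: {executed_trades(cost)} trades executed")
```

Dropping the contracting cost from 0.6 to 0.05 more than doubles the number of executed trades in this toy setup, which is the qualitative effect the authors anticipate from P2P contracting technologies.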


Potter C.,NASA | Klooster S.,California State University, Monterey Bay | Hiatt C.,California State University, Monterey Bay | Genovese V.,California State University, Monterey Bay | Castilla-Rubio J.C.,Silicon Valley 411
Environmental Research Letters | Year: 2011

Satellite remote sensing was combined with the NASA-CASA (Carnegie Ames Stanford Approach) carbon cycle simulation model to evaluate the impact of the 2010 drought (July through September) throughout tropical South America. Results indicated that net primary production in Amazon forest areas declined by an average of 7% in 2010 compared to 2008. This represented a loss of vegetation CO2 uptake and potential Amazon rainforest growth of nearly 0.5 Pg C in 2010. The largest overall decline in ecosystem carbon gains by land cover type was predicted for closed broadleaf forest areas of the Amazon river basin, including a large fraction of regularly flooded forest areas. Model results support the hypothesis that soil and dead wood carbon decomposition fluxes of CO2 to the atmosphere were elevated during the drought period of 2010 in periodically flooded forest areas, compared to those for forests outside the main river floodplains. © 2011 IOP Publishing Ltd.
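As a rough illustration of the bookkeeping behind the reported figures (a ~7% NPP decline corresponding to roughly 0.5 Pg C of lost uptake), the sketch below computes a regional anomaly from hypothetical gridded NPP fields. It is not the CASA model; the grid size, pixel area, and NPP values are all assumptions.

```python
import numpy as np

# Hypothetical gridded NPP fields (g C m^-2 per month) for Amazon forest
# pixels, July through September of 2008 and 2010. The real study used
# CASA model output driven by satellite observations; everything here is
# invented for illustration. Shapes: (months, lat, lon).
rng = np.random.default_rng(0)
npp_2008 = rng.uniform(150.0, 250.0, size=(3, 40, 60))
npp_2010 = npp_2008 * 0.93            # impose an illustrative ~7% decline

PIXEL_AREA_M2 = 5.0e9                 # assumed ~5,000 km^2 per grid cell

total_2008 = npp_2008.sum() * PIXEL_AREA_M2   # g C over the season
total_2010 = npp_2010.sum() * PIXEL_AREA_M2

decline_pct = 100.0 * (total_2008 - total_2010) / total_2008
loss_pg_c = (total_2008 - total_2010) / 1e15  # g C -> Pg C

print(f"seasonal NPP decline: {decline_pct:.1f}%")
print(f"vegetation carbon uptake lost: {loss_pg_c:.2f} Pg C")
```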


Aguiar A.P.D.,National Institute for Space Research | Ometto J.P.,National Institute for Space Research | Nobre C.,National Institute for Space Research | Lapola D.M.,Claro | And 7 more authors.
Global Change Biology | Year: 2012

We present a generic spatially explicit modeling framework to estimate carbon emissions from deforestation (INPE-EM). The framework incorporates the temporal dynamics related to the deforestation process and accounts for the biophysical and socioeconomic heterogeneity of the region under study. We build an emission model for the Brazilian Amazon combining annual maps of new clearings, four maps of biomass, and a set of alternative parameters based on the recent literature. The most important results are as follows: (a) Using different biomass maps leads to large differences in estimates of emission; for the entire region of the Brazilian Amazon in the last decade, emission estimates of primary forest deforestation range from 0.21 to 0.26 Pg C yr⁻¹. (b) Secondary vegetation growth presents a small impact on the emission balance because of the short duration of secondary vegetation. On average, the balance is only 5% smaller than the primary forest deforestation emissions. (c) Deforestation rates decreased significantly in the Brazilian Amazon in recent years, from 27 × 10³ km² in 2004 to 7 × 10³ km² in 2010. INPE-EM process-based estimates reflect this decrease even though the agricultural frontier is moving to areas of higher biomass. The decrease is slower than a non-process instantaneous model would estimate, as it considers residual emissions (slash, wood products, and secondary vegetation). The average balance, considering all biomass maps, decreases from 0.28 Pg C yr⁻¹ in 2004 to 0.15 Pg C yr⁻¹ in 2009; the non-process model estimates a decrease from 0.33 to 0.10 Pg C yr⁻¹. We conclude that the INPE-EM is a powerful tool for representing deforestation-driven carbon emissions. Biomass estimates are still the largest source of uncertainty in the effective use of this type of model for informing mechanisms such as REDD+. The results also indicate that efforts to reduce emissions should focus not only on controlling primary forest deforestation but also on creating incentives for the restoration of secondary forests. © 2012 Blackwell Publishing Ltd.
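The gap between the process-based estimates and an instantaneous model comes down to how the carbon in cleared biomass is released over time. The sketch below is a generic bookkeeping illustration of that idea, not the INPE-EM itself; the clearing series, burn fraction, and decay rate are invented for illustration.

```python
# A simplified bookkeeping sketch contrasting an "instantaneous" emission
# model with a process-based one that spreads residual emissions (slash
# decay, wood products) over the years after clearing. All values below
# are hypothetical, not INPE-EM parameters.

cleared_biomass_c = {2004: 0.30, 2005: 0.26, 2006: 0.20,
                     2007: 0.16, 2008: 0.14, 2009: 0.12}  # Pg C cleared per year

BURN_FRACTION = 0.4   # share released in the clearing year
DECAY_RATE = 0.2      # share of the remaining slash pool decaying each year

def instantaneous(clearings):
    return dict(clearings)  # everything emitted in the year of clearing

def process_based(clearings, horizon=2012):
    emissions, slash_pool = {}, 0.0
    for year in range(min(clearings), horizon + 1):
        new = clearings.get(year, 0.0)
        decay = slash_pool * DECAY_RATE
        slash_pool += new * (1 - BURN_FRACTION) - decay
        emissions[year] = new * BURN_FRACTION + decay
    return emissions

inst, proc = instantaneous(cleared_biomass_c), process_based(cleared_biomass_c)
for year in sorted(proc):
    print(year, f"instantaneous={inst.get(year, 0.0):.2f}",
          f"process-based={proc[year]:.2f}")
```

Because part of each year's cleared carbon is emitted in later years, the process-based series declines more slowly than the instantaneous one when deforestation rates fall, which is the pattern reported in the abstract.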


Faggin F.,Silicon Valley 411
Mondo Digitale | Year: 2015

After elucidating the fundamental concepts of consciousness, computer, and living cell, this article considers the crucial difference between a cell and a computer. The conclusion is that a cell is a dynamic and holistic nanosystem based on the laws of quantum physics, whereas a computer is a "static" system using the reductive laws of classical physics. The essence of consciousness is its capacity to perceive and know through sensations and feelings. However, there is no known physical phenomenon allowing the conversion of electrical activity, either in a computer or in a brain, into feelings: the two phenomena are incommensurable. To explain the nature of consciousness, the author introduces a model of reality based on cognitive principles rather than materialistic ones. According to this model, consciousness is a holistic and irreducible property of the primordial energy out of which everything is made (space, time, and matter). As such, consciousness can only grow if the components of a system combine holistically, as happens in a cell. But since the computer is a reductionistic system, its consciousness cannot grow with the number of its elementary components (the transistors), thus remaining the same as that of a transistor.


Sanders S.R.,University of California at Berkeley | Alon E.,University of California at Berkeley | Le H.-P.,University of California at Berkeley | Seeman M.D.,Silicon Valley 411 | And 2 more authors.
IEEE Transactions on Power Electronics | Year: 2013

This paper provides a perspective on progress toward realization of efficient, fully integrated dc-dc conversion and regulation functionality in CMOS platforms. In providing a comparative assessment between the inductor-based and switched-capacitor approaches, the presentation reviews the salient features of each with respect to effective utilization of switch technology and to the use and implementation of passives. The analytical conclusions point toward the strong advantages of the switched-capacitor (SC) approach with respect to both switch utilization and the much higher energy density of capacitors versus inductors. The analysis is substantiated with a review of recently developed and published integrated dc-dc converters of both the inductor-based and SC types. © 2012 IEEE.
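A back-of-the-envelope calculation gives a feel for the passive-density argument. The numbers below (on-die capacitance density, inductance density, and operating points) are generic assumptions for illustration, not values taken from the paper.

```python
# Rough comparison of on-die passive energy density, in the spirit of the
# paper's argument for switched-capacitor converters. All densities and
# operating points are assumed values for a generic CMOS process.

CAP_DENSITY_F_PER_M2 = 10e-15 / 1e-12   # 10 fF/um^2 expressed in F/m^2
CAP_VOLTAGE_V = 1.0

IND_DENSITY_H_PER_M2 = 2e-9 / 1e-6      # 2 nH/mm^2 expressed in H/m^2
IND_CURRENT_A = 0.1

e_cap = 0.5 * CAP_DENSITY_F_PER_M2 * CAP_VOLTAGE_V**2   # J per m^2 of die area
e_ind = 0.5 * IND_DENSITY_H_PER_M2 * IND_CURRENT_A**2   # J per m^2 of die area

print(f"capacitor energy density: {e_cap:.2e} J/m^2")
print(f"inductor  energy density: {e_ind:.2e} J/m^2")
print(f"ratio (cap/ind): {e_cap / e_ind:.0f}x")
```

Under these assumed numbers the capacitor stores a few hundred times more energy per unit die area than the inductor, which is the basic reason the paper favors the SC approach for full integration.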


Stephen (Steve) Perlman is the founder of OnLive and WebTV. He is known for the invention of QuickTime, which is built into all Apple computers and phones [1]. In February, he announced a wireless broadband technology called pCell, created by his latest start-up, Artemis Networks. The company has been working on this technology for ten years under the code name DIDO. This technology will enable full-speed wireless broadband to every mobile device, regardless of how many users are sharing the same wireless spectrum at once [2]. The technology is compatible with existing fourth-generation (4G) standards, such as the LTE used by the most recent mobile phones. Before we take a careful look at the technology, let us step back and put this into context. © 2014 IEEE.
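Public descriptions of pCell/DIDO involve many distributed transmitters jointly shaping their signals so that each user receives its own clean stream even though everyone shares the same spectrum simultaneously. Artemis has not published its algorithm, but the general multi-user MIMO idea can be sketched with a simple zero-forcing precoder; the channel, antenna counts, and symbols below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_antennas = 4, 4          # distributed access points == users here

# Hypothetical flat-fading channel matrix: H[u, a] is the complex gain from
# access point a to user u.
H = (rng.standard_normal((n_users, n_antennas))
     + 1j * rng.standard_normal((n_users, n_antennas))) / np.sqrt(2)

symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n_users)  # QPSK

# Zero-forcing precoder: transmit x = pinv(H) @ s so each user receives only
# its own symbol, even though all users occupy the same spectrum at once.
x = np.linalg.pinv(H) @ symbols
received = H @ x                    # noiseless channel for clarity

print(np.allclose(received, symbols))   # True: per-user streams are separated
```

In a real system the channel must be estimated continuously and noise and power constraints matter, but the sketch shows why adding cooperating transmitters can serve more simultaneous users in the same spectrum rather than splitting capacity among them.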


Rosenthal D.S.H.,Silicon Valley 411
Communications of the ACM | Year: 2010

Protecting data bits over the long term comes down to keeping more independent copies and auditing those copies more frequently. Analyses of storage reliability often assume that disk and sector failures are the only contributors to system failures, ignoring all the other threats to stored data as possible causes of data loss. The NetApp study identified the incidence of silent storage corruption in individual disks in RAID arrays and found more than 4 × 10⁵ silent corruption incidents. Identical systems are subject to common-mode failures, such as those caused by a software bug present in all the systems damaging the same data in each. Fast array of wimpy nodes (FAWN) couples low-power embedded CPUs to small amounts of local flash storage, and balances computation and I/O capabilities to enable efficient, massively parallel access to data.
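A minimal sketch of the kind of audit the column advocates: independently stored replicas are periodically hashed and compared so that silent corruption is detected while enough good copies still exist to repair it. The replica names and contents below are hypothetical, and a production archival system does considerably more (sampling, scheduling, tamper resistance).

```python
import hashlib
from collections import Counter

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def audit(replicas: dict[str, bytes]) -> dict:
    """Compare independent copies and flag the ones that disagree.

    A simple majority vote over content hashes; frequent auditing catches
    silent corruption before too few good copies remain.
    """
    hashes = {name: digest(data) for name, data in replicas.items()}
    majority, _ = Counter(hashes.values()).most_common(1)[0]
    return {"consensus": majority,
            "suspect": [n for n, h in hashes.items() if h != majority]}

# Hypothetical example: three copies, one silently corrupted.
copies = {"site-a": b"archived document v1",
          "site-b": b"archived document v1",
          "site-c": b"archived docum3nt v1"}   # bit rot
print(audit(copies))   # 'suspect' lists ['site-c']
```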


Walker A.J.,Philips | Walker A.J.,Silicon Valley 411
IEEE Transactions on Semiconductor Manufacturing | Year: 2013

Scaling challenges with NAND flash have forced manufacturers to consider monolithic 3-D process and device architectures as potential successor technologies. Those that involve a vertical cylindrical channel are regarded as favorites. These include bit-cost scalable (BiCS) NAND, pipe-shaped bit-cost scalable (p-BiCS) NAND, and terabit cell array transistor (TCAT) NAND. It has been assumed that their manufacturing costs decrease monotonically with the number of additional device layers. This paper presents a rigorous analysis of this assumption based on recently reported challenges associated with the construction of these architectures. It is shown that there is a minimum in die cost, after which costs increase with increasing device layers. Also, achievable die sizes using these approaches may not even reach those of existing production NAND flash. An important consequence is that monolithic 3-D approaches that involve more lithography-intensive steps may actually result in lower total cost, provided that these scale appropriately. © 1988-2012 IEEE.
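The shape of the cost argument, a minimum in cost per bit followed by a rise as yield losses compound, can be reproduced with a toy model. All numbers below are invented for illustration and are not the paper's cost or yield assumptions.

```python
# Toy cost-per-bit model illustrating why stacking ever more device layers
# eventually raises, rather than lowers, die cost: per-layer process cost
# adds linearly while compound yield loss grows geometrically.

BASE_WAFER_COST = 1500.0     # fixed substrate and periphery cost ($, assumed)
COST_PER_LAYER = 120.0       # incremental deposition/etch/litho cost per layer ($, assumed)
PER_LAYER_YIELD = 0.995      # compound yield hit taken for each added layer (assumed)
BITS_PER_LAYER = 1.0         # relative capacity contributed by one layer

def relative_cost_per_bit(layers: int) -> float:
    wafer_cost = BASE_WAFER_COST + COST_PER_LAYER * layers
    good_bits = BITS_PER_LAYER * layers * PER_LAYER_YIELD ** layers
    return wafer_cost / good_bits

costs = {n: relative_cost_per_bit(n) for n in range(8, 257, 8)}
best = min(costs, key=costs.get)
print(f"cost minimum at ~{best} layers; beyond that, yield loss dominates")
```

With these made-up parameters the minimum falls somewhere in the tens of layers; the paper's point is that such a minimum exists at all, and where it lies depends on the real per-layer costs and yields of each architecture.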

