
tlab alumnus publishes in Nature

tlab alumnus Dr. Jens Bürger publishes in Nature Humanities and Social Sciences Communications. Way to go!

Bürger, J., Laguna-Tapia, A. Individual homogenization in large-scale systems: on the politics of computer and social architectures. Palgrave Commun 6, 47 (2020). https://doi.org/10.1057/s41599-020-0425-4

Abstract: One determining characteristic of contemporary sociopolitical systems is their power over increasingly large and diverse populations. This raises questions about power relations between heterogeneous individuals and increasingly dominant and homogenizing system objectives. This article crosses epistemic boundaries by integrating computer engineering and a historical-philosophical approach, making the general organization of individuals within large-scale systems and corresponding individual homogenization intelligible. From a versatile archeological-genealogical perspective, an analysis of computer and social architectures is conducted that reinterprets Foucault’s disciplines and political anatomy to establish the notion of politics for a purely technical system. This permits an understanding of system organization as modern technology with application to technical and social systems alike. Connecting to Heidegger’s notions of the enframing (Gestell) and a more primal truth (anfänglicheren Wahrheit), the recognition of politics in differently developing systems then challenges the immutability of contemporary organization. Following this critique of modernity and within the conceptualization of system organization, Derrida’s democracy to come (à venir) is then reformulated more abstractly as organizations to come. Through the integration of the discussed concepts, the framework of Large-Scale Systems Composed of Homogeneous Individuals (LSSCHI) is proposed, problematizing the relationships between individuals, structure, activity, and power within large-scale systems. The LSSCHI framework highlights the conflict of homogenizing system-level objectives and individual heterogeneity, and outlines power relations and mechanisms of control shared across different social and technical systems.

Nature paper on “Adversarial explanations for understanding image classification decisions and improved neural network robustness”

Our latest work was published in Nature Machine Intelligence this week: Woods, W., Chen, J. & Teuscher, C. Adversarial explanations for understanding image classification decisions and improved neural network robustness. Nat Mach Intell (2019). https://doi.org/10.1038/s42256-019-0104-6

“Deep neural networks can be led to misclassify an image when minute changes that are imperceptible to humans are introduced. While for some networks this ability can cast doubt on the reliability of the model, it also offers explainability for networks that use more robust regularization.”
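For readers unfamiliar with adversarial examples, the sketch below shows the classic fast gradient sign method (FGSM), a generic way to craft the kind of imperceptible perturbation the quote describes. It is only an illustration of the phenomenon, not the attack or explanation procedure used in the paper; the model, image, and label arguments are hypothetical placeholders, and pixel values are assumed to lie in [0, 1].

import torch
import torch.nn.functional as F

def fgsm_perturbation(model, image, label, epsilon=0.01):
    # Generic FGSM illustration (Goodfellow et al., 2015), not the paper's method.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded in magnitude by epsilon.
    adversarial = image + epsilon * image.grad.sign()
    # Assumes inputs are normalized to [0, 1].
    return adversarial.clamp(0.0, 1.0).detach()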

Abstract: For sensitive problems, such as medical imaging or fraud detection, neural network (NN) adoption has been slow due to concerns about their reliability, leading to a number of algorithms for explaining their decisions. NNs have also been found to be vulnerable to a class of imperceptible attacks, called adversarial examples, which arbitrarily alter the output of the network. Here we demonstrate both that these attacks can invalidate previous attempts to explain the decisions of NNs, and that with very robust networks, the attacks themselves may be leveraged as explanations with greater fidelity to the model. We also show that the introduction of a novel regularization technique inspired by the Lipschitz constraint, alongside other proposed improvements including a half-Huber activation function, greatly improves the resistance of NNs to adversarial examples. On the ImageNet classification task, we demonstrate a network with an accuracy-robustness area (ARA) of 0.0053, an ARA 2.4 times greater than the previous state-of-the-art value. Improving the mechanisms by which NN decisions are understood is an important direction for both establishing trust in sensitive domains and learning more about the stimuli to which NNs respond.
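The abstract names two ingredients: a regularizer inspired by the Lipschitz constraint and a half-Huber activation function. The sketch below is one plausible PyTorch rendering of those ideas, assuming a generic input-gradient penalty and a ReLU-like activation that is quadratic near zero and linear beyond a threshold. It is an interpretation under those assumptions, not the paper's implementation; the exact formulations are in the pre-print linked below.

import torch
import torch.nn as nn

class HalfHuber(nn.Module):
    # One plausible reading of a "half-Huber" activation: zero for negative
    # inputs, quadratic on (0, delta], linear beyond delta (continuous, with
    # continuous derivative). The paper's exact definition may differ.
    def __init__(self, delta: float = 1.0):
        super().__init__()
        self.delta = delta

    def forward(self, x):
        quad = 0.5 * x.pow(2) / self.delta
        lin = x - 0.5 * self.delta
        return torch.where(x <= 0, torch.zeros_like(x),
                           torch.where(x <= self.delta, quad, lin))

def lipschitz_penalty(model, x, weight=1e-2):
    # Generic double-backprop penalty on the input gradient, encouraging a
    # small local Lipschitz constant. Illustrative only: the paper's
    # regularizer is "inspired by the Lipschitz constraint" and its precise
    # form differs from this sketch.
    x = x.clone().detach().requires_grad_(True)
    out = model(x)
    grad, = torch.autograd.grad(out.sum(), x, create_graph=True)
    return weight * grad.flatten(1).pow(2).sum(dim=1).mean()

In training, such a penalty would simply be added to the usual classification loss, so the network is pushed to respond smoothly to small input changes.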

Open Access pre-print: https://arxiv.org/abs/1906.02896

Podcast: https://www.stitcher.com/podcast/the-data-skeptic-podcast/e/67341825

Reddit thread: https://www.reddit.com/r/MachineLearning/comments/ds0st4/r_adversarial_explanations_for_understanding