Editor-in-Chief's Note
ChatGPT has become an iconic new AI technology of broad public concern. The team of Academician WU Zhiqiang began tracking 624 AI technologies several years ago, identifying and applying major breakthroughs in deep learning and further expanding the application scenarios of digital technology. New-generation artificial intelligence empowers planning technology and brings comprehensive change to the planning field, spanning ideas, technologies, methods, and management; it also requires us to reflect critically on the limitations and ethical implications of these new technologies.
To this end, the editorial office has invited 11 experts for the academic forum "New-Generation Artificial Intelligence Empowering Urban Planning: Opportunities and Challenges". We trust that our readers will benefit from it, and we look forward to receiving your own reflections and thoughts at the editorial office of Urban Planning Forum (《城市规划学刊》).
Otthein HERZOG (Tongji University, Shanghai, and University of Bremen, Germany)
Urban and regional planning poses many challenges and offers great potential because of the interdisciplinary nature of the many areas that must be covered by "good" planning outcomes, outcomes that eventually lead to "good" city operations. In this context, "good" means above all "good" for the people who are supposed to live in the city or district under planning, also with respect to the vastly different innovation cycles encountered in cities. Some examples of cycles influencing city structures: Information and Communication Technology about 5 years, automobiles 15 years, heating technology 20 years, building construction 60 years, city thoroughfares 80 years, and wastewater infrastructure 100 years.
On the one hand, a city planner must start from the present needs of the people and from the technologies available to satisfy them; on the other hand, city structures must be planned and built as flexibly as possible in order to react to and incorporate future technologies. While the latter can be covered by predictions, e.g., by scenarios created by transdisciplinary experts or, in the shorter term, by predictions based on past data, the current needs of the people and the required technologies can be addressed by evidence-based planning. Such planning draws on requirements acquired from citizens and on all available data sources in a city: mobility and logistics data from public transportation or vehicle frequency counts by road sensors, energy and water consumption data, environmental data such as air quality indicators and greenhouse gas emissions, population health data, urban production data, industry clusters, and public services.
Much progress has been made during the last 20 years in generating and collecting the data needed for urban planning, in determining proper indicators for various urban properties, and in using the data, e.g., through graphical representations, statistics, and big data analytics, to determine trends, interrelationships, and dependencies. In this approach, the data, their representations, and the analysis results form the foundation of a specific model for the planning task at hand, a model that resides in the mind of the respective urban planner. Correlation analysis is an especially useful tool in this context because it detects dependencies between different data variables even where causality cannot, in general, be deduced.
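As a minimal sketch of such a correlation analysis, the following Python snippet computes pairwise Pearson correlations between city indicators; the file name city_indicators.csv and the column names are illustrative assumptions, not data from any study cited here.

```python
import pandas as pd

# Hypothetical table of per-district (or per-day) city indicators;
# the file name and column names are illustrative assumptions.
df = pd.read_csv("city_indicators.csv")
# e.g., columns: aqi, bus_stops_per_km2, vehicle_count,
#                industrial_sites, paved_road_pct

# Pairwise Pearson correlation between all numeric indicators.
corr = df.corr(numeric_only=True)

# Dependencies of the Air Quality Index on the other indicators,
# sorted by strength; correlation does not imply causation.
print(corr["aqi"].drop("aqi").sort_values(key=abs, ascending=False))
```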
An example of this approach is a study in which city environmental indicators relevant to the air quality conditions of four Chinese cities were first determined, such as highways, the percentage of paved roads, real-time traffic data, industry clusters of different industry types, shopping centers, and public transportation facilities. Correlation analyses then showed, e.g., that better public transport correlated with a better Air Quality Index (AQI), that AQI changes caused by industry clusters varied vastly throughout the day, and that car emissions contributed greatly to an increased AQI. Using the same data, a city-specific cost model could also be defined and, moreover, used to train a Back Propagation Neural Network (BPNN) to provide AQI predictions for the four cities under different assumptions. The knowledge won from the data and incorporated in the BPNN could therefore drive the BPNN as a decision support system, where formerly a simulation system would have had to be programmed and run.
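As a hedged illustration of the BPNN step, the sketch below trains a small feedforward network by backpropagation to predict AQI from indicator features. It uses scikit-learn's MLPRegressor as a stand-in, with random placeholder data, since the study's actual architecture, features, and data are not given here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Placeholder data: rows are city/day samples, columns are indicators
# (e.g., traffic counts, industry clusters, public transport density).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                                 # 6 hypothetical indicators
y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=1000)  # proxy AQI target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

# A small multilayer perceptron trained by backpropagation (a BPNN).
bpnn = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
bpnn.fit(scaler.transform(X_train), y_train)

# Once trained, the network answers "what-if" queries directly,
# replacing a hand-programmed simulation as a decision support tool.
print("R^2 on held-out data:", bpnn.score(scaler.transform(X_test), y_test))
```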
This example outlines quite well the trade-offs of conventional programming vs. neural networks (NN): while conventional programming requires layer upon layer of models formalized through programming languages, finally arriving at a running (and hopefully provably correct) program, this effort is replaced in the NN case by the proper determination of indicators, the subsequent collection of appropriate data representing those indicators, and the (computationally costly) training of the NN. The burden of system implementation has thus shifted away from coding towards the selection of indicators and their related data (the training examples). This means that an inadequate selection of examples can lead to bias, to over-specification, or even to missing parts of the model trained into an NN. Therefore, much of the work that was traditionally devoted to coding must now go into the selection, cleaning, and checking of the examples, steps that are indispensable to ensure the viability of the approach.
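A minimal sketch of such example checking, with entirely hypothetical file and column names: before any training, the data are screened for gaps, duplicates, implausible values, and coverage imbalance across districts, i.e., exactly the defects that would otherwise be trained into the network as bias or missing model parts.

```python
import pandas as pd

df = pd.read_csv("training_examples.csv")  # hypothetical indicator table

# 1. Completeness: which indicators have missing values, and how many?
print(df.isna().mean().sort_values(ascending=False))

# 2. Duplicates and implausible values (AQI is non-negative by definition).
df = df.drop_duplicates()
df = df[df["aqi"] >= 0]

# 3. Coverage: are all districts represented, or is the sample biased
#    towards a few well-instrumented areas?
print(df["district"].value_counts(normalize=True))
```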
The same cautionary remarks also apply to Large Language Models (LLM), at least as far as the data that forms the foundation of their training, and thereby of their knowledge acquisition, is concerned. What really differentiates them from "ordinary" NNs is that their training can derive knowledge even from natural language texts (and even from conventional program code). This certainly opens the way to "natural" communication as the mode of interaction with "computers". However, as the knowledge in LLMs is basically encoded only as very long strings built by always appending the next most likely token, LLMs lack the capability to logically deduce knowledge. They are even able to hallucinate answers, natural language texts that do not relate to any facts in their training data and can be plainly wrong! But given (in the best case verified) data and text input for their training, and restricted to application areas with well-determined knowledge bases, LLM technology constitutes the next-generation tool for many applications: no tedious coding work at the syntax level, just the power of bidirectional natural language communication at the semantic level, at least for the human partner.
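To make the "next most likely token" mechanism concrete, here is a deliberately tiny, self-contained sketch: a character-level bigram model that generates text purely by repeatedly sampling a likely next character. It is a toy stand-in for an LLM, not how any production model is implemented, but it shows why fluent-looking output need not be backed by logical deduction.

```python
import random
from collections import Counter, defaultdict

corpus = ("good planning leads to good city operations. "
          "good data leads to good planning decisions.")

# Count, for every character, how often each character follows it.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

# Generate by repeatedly sampling a likely next character.
random.seed(0)
text = "g"
for _ in range(60):
    options = follows[text[-1]]
    if not options:
        break
    chars, weights = zip(*options.items())
    text += random.choices(chars, weights=weights)[0]

print(text)  # statistically plausible character strings, not deduced facts
```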
For urban and regional planning, the LLM approach will be, and that is at least what I believe, the biggest step of this application field towards fully integrated information technology for decision making. Think of feeding all relevant textbooks, rules, laws, etc. for a specific planning subject into an LLM: this will definitely enable more comprehensive planning cycles and, maybe most importantly of all, enable urban and regional planners to get their arms around the multiple interdependencies between the many aspects of the different planning areas. Moreover, the LLM approach will also enable the acquisition of the knowledge needed to develop Digital Twins representing all important dynamic aspects of a city, thus bringing LLM technology to city operations as well. In this way, evidence-based (partial and interacting) validated models will become a solid foundation of urban and regional planning tasks as well as of city operations.
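One hedged sketch of what "feeding the rules and laws into an LLM" could look like in practice is retrieval-grounded question answering, where the model is only allowed to answer from retrieved passages of the planning knowledge base. Everything here is an assumption for illustration: the tiny keyword retriever, the example rules, and the llm_complete() function, which merely stands in for whatever LLM API is actually used.

```python
# Hypothetical sketch: ground an LLM's answer in a planning knowledge base.
rules = [
    "Rule 4.2: Residential towers above 100 m require a wind comfort study.",
    "Rule 7.1: New districts must provide 9 m2 of public green space per resident.",
    "Rule 9.3: Wastewater infrastructure is dimensioned for a 100-year horizon.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy keyword retriever; a real system would use vector search."""
    words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:k]

def llm_complete(prompt: str) -> str:
    """Hypothetical placeholder for an actual LLM API call."""
    raise NotImplementedError("plug in a real model here")

question = "How much public green space does a new district need?"
context = "\n".join(retrieve(question, rules))
prompt = (f"Answer ONLY from the planning rules below; "
          f"say 'not covered' otherwise.\n{context}\n\nQ: {question}\nA:")
# answer = llm_complete(prompt)
```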