I am not in favor of using AI for security reasons to identify people. AI is not accurate enough to recognize people's faces; therefore, basing police investigations and evidence on AI algorithms and analytics would not result in accurate outcomes. Furthermore, I believe the use of AI for police enforcement is also dangerous, as AI facial recognition is racially biased and can lead to more violence from the police force against minorities. I don’t believe AI technology is evolved enough to be used in such precarious and delicate situations.
I believe AI facial recognition should not be used in state policing at this point in time. With its high error rates, and with increasing tension between governments and people, I feel that imposing a more watchful eye over society, using a technology that does not have a track record of success, will only cause conflict and further mistrust in institutions at a time when social cohesion is already crumbling. Perhaps in the future, with more development and for higher-level crimes, this could be a good opportunity, but given the technological errors and the privacy issues it raises at this point in time, I don’t agree with it.
Hmmm, although it may sound counter-intuitive and anti-democratic: it is far easier to pick faults in technology that appears to infringe on our personal freedom, but it is equally important to look at statistics and data on how the tech has actually helped us already. Without the help of security cameras, our success rate at finding a missing child after the initial 24 hours is next to none. Without rapid identification, it is next to impossible to locate serial murderers before they find their next victim. Furthermore, it is not only used in criminal investigations; it is also widely utilized in government, airports, schools, banking, workplaces, and buses… For example, during COVID-19, countries relied on the tech to make many services contactless. There is no doubt that the tech is here to stay, and the question should not be whether we can ban it, but rather how we can move forward and increase transparency, public awareness, and formal regulations that protect our privacy and civil rights. Because if we don’t, the state will probably just go ahead and use it secretly, and that’s even worse… :(
From my own experience and knowledge, I am strongly against the use of AI technology for mass surveillance and state policing. It raises serious concerns about privacy and civil liberties. In my opinion, it is very important to ensure that the use of such technology is transparent, accountable, and respects the rights of individuals. We must carefully consider the potential consequences and weigh the benefits against the risks before adopting such risky measures.
I don’t believe AI should be used in policing. Not only does it involve technology with built-in biases detrimental to already marginalised communities, but it also allows policing to happen in places that would generally be considered private moments of individuals and society, which in turn reduces quality of life.
I believe that if AI were to be ethically incorporated into policing practices, the social and economic factors leading to abuse of power and other poor policing practices would need to be addressed first. The AIs used are biased because the police are biased. There is a Netflix documentary called “Coded Bias” that explores this issue.
I believe that AI can be incorporated into policing, but it should be done with great caution toward social and privacy issues and incorporated selectively. Well incorporated, it can increase safety; poorly incorporated, it leads to a system where privacy is breached and people are continuously monitored, which carries a risk of misuse and of controlling people. There should be a strict regulatory environment to determine where and how AI can be safely used in policing.
I am against the use of AI in state policing efforts. The utilization of AI in surveillance leads very easily to overstepping citizens’ right to privacy in a world where, even without AI, many police forces already abuse their physical and bureaucratic power. Perhaps ethical AI systems should be developed to police the police: to ensure that they do not act with unnecessary brutality and bias, and to protect citizens themselves rather than those who already have, and abuse, their power.
I am against AI being used in policing. Although I feel like my answers are pretty similar every week, I think it still comes down to the idea that there are not enough resources or research in our current society for us to properly and ethically incorporate AI into our daily lives. Privacy issues aside, I do feel that there are important potential merits of AI in this field, such as accuracy or a decrease in bias, but as was discussed in class, with what we have now, the data the AI would pull from comes from pre-existing human data, which we know is full of error and bias. So, similar to other discussions we have had, I feel there may be a future where some helpful AI tools could be incorporated into policing, but there are still far too many things that need to be taken into consideration for this to become normalized now.
Based on examples of social problems arising from the immaturity of facial recognition technology, it becomes evident that AI’s facial recognition capabilities are not yet developed enough to provide reliable assistance. Consequently, widespread adoption of such technology in surveillance and national policing should be reconsidered. Given the need for accuracy and minimal error in legal proceedings, I firmly oppose the current use of AI in police systems. Precision and thoroughness are crucial in matters of law, and until AI’s facial recognition reaches a higher level of maturity, caution should be exercised in its implementation.
I think it’s hard to pick one side in this debate. Using AI in surveillance efforts helps improve the speed of identification and the efficiency of investigation. It can help prevent crime by analyzing the times and places where crimes are highly likely to happen, and it can also assist in locating missing persons and improving the overall security of public spaces. But I think this kind of surveillance system raises much bigger ethical concerns, such as the possibility of false identifications disproportionately impacting marginalized communities. These systems can also become a tool for control or be misused by authorities, who can use them to exert control and domination over the population by monitoring every personal action and behavior of individuals. Given these issues, I think user empowerment should be considered more in designing these systems in the future, so there would be a balance between the power of authorities (police, etc.) and individual rights.
I am currently against the use of AI in surveillance and state policing. Although I think that incorporating AI tools would be useful in facilitating investigations and otherwise providing assistance, relying too much on AI could be dangerous especially with the amount of bias that is still present in facial recognition AI. Widely using AI in surveillance can also lead to issues related to privacy and possibly increased distrust in the government and the police. So as of now, the technology is not advanced or objective enough to be fully integrated into policing and surveillance efforts. More research and discussions are necessary to analyze how to best utilize AI in this context.
I am sceptical about using AI for security reasons. It breaches several privacy regulations and currently leads to a lot of identification mistakes. It might be useful for identifying criminals, but its use should be closely monitored and regulated.
I lean towards a favorable stance on AI in surveillance and policing. The increase in efficient monitoring techniques will drastically optimize our safe-keeping efforts and will allow resources to be better spent (e.g., fewer police making sense of data, and more on the street keeping us safe). I don’t see how AI, under the right legislation and guidelines, would do a worse job than humans in terms of human biases; it may even prove to be a good tool to counter biases instead. Of course, the technology has to be further tested and developed, but I believe that if utilized in the right way, the ethical concerns can be remedied, and it could even work towards more objective monitoring systems.
I believe there is a very thin line between utilizing these technologies as supports in surveillance/policing (as the presented cases showed) and a preemptive approach that oversees everything, which leads to an abundance of data that might not be related at all, if not an infringement of rights. They also possess a very high rate of bias that puts certain demographics under a presumption of guilt more often than not, which convolutes the data set that drives the algorithm even further. So unless we are fully aware and able to take control of the use of these technologies within an accepted universal framework, I am certainly against using them as of yet.
AI technology as a form of security surveillance should not be used. The infringement on one’s rights is on its own enough to turn most people against it (e.g., the mask mandate). There is also a lack of trust from the general public in their own government and police. Most people would be wary of the government misusing the data for other purposes, despite the new method being promoted as “safety”, because of corruption. There are arguments that misuse could be prevented through transparency about algorithms and data; however, each country’s IP rules are different, and this will lead to inconsistency in policy and regulations. Lastly, such transparency would just weaken the security system.
In my opinion, AI could and should be used in surveillance and policing, but not in its current state. I believe that in the future we might create AI systems that are more efficient than humans at tasks like crime detection, and that would not be biased regarding gender and skin colour. While humans will forever be biased, AIs are what they are fed and can be created in different ways. I think we should continue our efforts to make them more reliable so that, in the end, we can use them for this purpose. For the present moment, however, some drastic measures have to be taken before it is acceptable to use AI in surveillance and policing.
In my current opinion, AI facial recognition technology (FRT) should be held under proper, decent regulation in each area and country. Engagement from all entities and the public should be required to create proper co-creation policies, so that we as the public are engaged, give our consent, and have our privacy protected by all stakeholders.
On the positive side, I do agree with applying AI FRT in public spaces, because it carries a real benefit for us: we can feel safer from criminal acts.
I believe that AI should not be used in policing due to the way that it can infringe on people’s privacy and lead to abuses of power. However, I believe there could be space for AI policing tools in the future if accuracy is improved and authorities are transparent about how they are being used.
I am not in favor of using AI, as it feels like an invasion of privacy that is being normalized due to the compromises and breaches of privacy by many companies in recent years!
I do not think it is appropriate to use AI to recognize faces for police purposes, due to the infringement of privacy and the failure to guarantee the accuracy of AI in identifying and recognizing the correct suspects. However, AI may help facilitate security monitoring if it is controlled and directed by a human, with the final identification made by a person behind it.
Sorry, I think I missed this one, as with the week before. I believe moderate usage of AI for security reasons is to some extent favorable. Until we can eliminate bias toward certain parties, full reliance on AI for security purposes remains difficult. Hence, I believe the current situation would still favor using AI while adding a human check for consistency and to guard against bias.
Considering the trade-off between privacy issues and crime prevention, I am against using AI in surveillance and policing. Since deploying algorithms such as facial recognition requires storing and analysing huge amounts of users’ private data, people’s privacy is seriously challenged by the risk of data breaches (there are already such cases). Also, the effectiveness of these AI approaches is questionable, because criminals will take countermeasures, which may even make existing digital forensics methods less effective.