Investigative & Security Professionals for Legislative Action

Security Related Topics

  • 08 May 2018 5:29 PM | Anonymous member (Administrator)

    Below is Exhibit 99.1 filed with the Securities and Exchange Commission relative to the cybersecurity hack of Equifax that resulted in the disclosure of personally identifiable information (PII), including the names, dates of birth, and Social Security numbers of most U.S. consumers. In addition, some breaches included credit card numbers with expiration dates, driver's license photos, passports, and additional identifying data. It should be noted that Equifax has referred to the matter as a "Cybersecurity Incident" rather than a cybersecurity breach or cyber hack. - Bruce H. Hulme, CFE, BAI - ISPLA Director of Government Affairs




    Over the past several months, congressional committees have requested information from Equifax regarding the extent of the cybersecurity incident that Equifax reported on September 7, 2017. Accordingly, Equifax submits this statement to supplement the company’s responses regarding the extent of the incident impacting U.S. consumers.

    As announced on September 7, 2017, the information stolen by the attackers primarily included:


        names, Social Security numbers, birth dates, addresses and, in some instances, driver’s license numbers of 143 million U.S. consumers (since updated)


        credit card numbers of approximately 209,000 consumers


        certain dispute documents with personal identifying information of approximately 182,000 consumers


        limited personal information for certain United Kingdom and Canadian residents.

    As earlier statements made clear, the company’s forensics experts found no evidence that Equifax’s U.S. and international core consumer, employment and income, or commercial credit reporting databases were accessed as part of the cyberattack. Furthermore, Equifax offered a comprehensive support package to impacted consumers on September 7, 2017.

    The attackers stole consumer records from a number of database tables with different schemas, and the data elements stolen were not consistently labeled. For example, not every database table contained a field for driver’s license number, and for more common elements like first name, one table may have labeled the column containing first name as “FIRSTNAME,” another may have used “USER_FIRST_NAME,” and a third may have used “FIRST_NM.” With assistance from Mandiant, a cybersecurity firm, forensic investigators were able to standardize certain data elements for further analysis to determine the impacted consumers and Equifax’s notification obligations.
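    The standardization step described above can be illustrated with a short sketch. This is a hypothetical illustration only: the synonym table and function names below are invented for the example, and the actual forensic tooling used by Mandiant is not public.

    ```python
    # Hypothetical sketch of column-name standardization across tables with
    # inconsistent schemas, as described in the filing. The SYNONYMS mapping
    # and function name are invented for illustration.
    SYNONYMS = {
        "FIRSTNAME": "First Name",
        "USER_FIRST_NAME": "First Name",
        "FIRST_NM": "First Name",
        "DOB": "D.O.B.",
        "SSN": "SSN",
    }

    def standardize_columns(table_columns):
        """Map each raw column label to a canonical data-element name.

        Columns with no known synonym are left unmapped so an analyst
        can review them manually.
        """
        mapped, unmapped = {}, []
        for col in table_columns:
            key = col.strip().upper().replace(" ", "_")
            if key in SYNONYMS:
                mapped[col] = SYNONYMS[key]
            else:
                unmapped.append(col)
        return mapped, unmapped

    mapped, unmapped = standardize_columns(["FIRST_NM", "USER_FIRST_NAME", "ACCT_FLAG"])
    # Both first-name variants map to the same canonical element;
    # the unrecognized column is flagged for manual review.
    ```

    Once every table's columns are mapped to a canonical name, records from tables with different schemas can be merged and analyzed as a single dataset.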

    As a result of its analysis of the standardized data elements, including using data not stolen in the attack, the company was able to confirm the approximate number of impacted U.S. consumers for each of the following data elements: name, date of birth, Social Security number, address information, gender, phone number, driver’s license number, email address, payment card number and expiration date, TaxID, and driver’s license state. As stated above, Equifax notified the public on September 7, 2017 of the primary data elements that were stolen. With respect to the data elements of gender, phone number, and email addresses, U.S. state data breach notification laws generally do not require notification to consumers when these data elements are compromised, particularly when an email address is not stolen in combination with further credentials that would permit access. The chart that follows provides the approximate number of impacted U.S. consumers for each of the listed data elements.
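    The per-element counting described above amounts to tallying, for each standardized data element, the distinct consumers for whom that element appeared in the stolen records. A minimal sketch, with an invented record layout (the real data model is not public):

    ```python
    # Hypothetical sketch: count distinct impacted consumers per data element.
    # The record layout and "consumer_id" key are invented for illustration.
    from collections import defaultdict

    records = [
        {"consumer_id": 1, "Name": "A. Smith", "SSN": "xxx"},
        {"consumer_id": 1, "Name": "A. Smith"},             # same person, second table
        {"consumer_id": 2, "Name": "B. Jones", "Phone": "555"},
    ]

    def impacted_by_element(records):
        seen = defaultdict(set)  # data element -> set of consumer ids
        for rec in records:
            cid = rec["consumer_id"]
            for element, value in rec.items():
                if element != "consumer_id" and value:
                    seen[element].add(cid)
        return {element: len(ids) for element, ids in seen.items()}

    counts = impacted_by_element(records)
    # Duplicate rows for the same consumer are counted once per element,
    # so "Name" covers 2 consumers while "SSN" and "Phone" each cover 1.
    ```

    Using sets keyed by consumer identifier is what prevents the same person, appearing in several tables, from inflating the per-element totals.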



    Data Element Stolen    Columns Analyzed1    Number of Impacted U.S. Consumers
    Name    First Name, Last Name, Middle Name, Suffix, Full Name    146.6 million
    Date of Birth    D.O.B.    146.6 million
    Social Security Number2    SSN    145.5 million
    Address Information    Address, Address2, City, State, Zip    99 million
    Gender    Gender    27.3 million
    Phone Number    Phone, Phone2    20.3 million
    Driver’s License Number3    DL#    17.6 million
    Email Address (w/o credentials)    Email Address    1.8 million
    Payment Card Number and Expiration Date    CC Number, Exp Date    209,000
    TaxID    TaxID    97,500
    Driver’s License State    DL License State    27,000

    The data described above is not additional stolen data, and it does not impact additional consumers. The table reflects a summary of the company’s analysis of data stolen in last year’s cybersecurity incident. This includes the extra measures the company took to confirm the identities of U.S. consumers whose partial driver’s license information was stolen but who were not in the previously identified affected population, as announced on March 1, 2018. Equifax identified these consumers by referencing other information in proprietary company records that the attackers did not steal, and by engaging the resources of an external data provider.

    1  The attackers accessed records across numerous database tables with different schemas. Forensic investigators were able to standardize certain columns containing various types of information for further analysis to determine the impacted consumers and Equifax’s notification obligations. The full list of standardized columns is SSN, First Name, Last Name, Middle Name, Suffix, Gender, Address, Address2, City, State, ZIP, Phone, Phone2, DL #, DL License State, DL Issued Date, D.O.B., Canada SIN, Passport #, CC Number, Exp Date, CV2, TaxID, Email Address, Full Name.
    2  This represents the number of individuals who are part of the impacted population because their SSN was stolen. The impacted population included individuals with a SSN not stolen together with a name in jurisdictions that require notification in such circumstances (e.g., Indiana). Individual Tax ID numbers (ITINs) were generally housed in the same field as the SSNs. For clarity, all ITINs stored in the SSN field were included in the 145.5 million impacted population and consumers could use their ITIN in the lookup tool to see if they were affected. For approximately 97,500 individuals, the additional “TaxID” field contained a value that was stolen together with a SSN included in the lookup tool.
    3  This includes the 2.4 million individuals whose partial driver’s license information and name were stolen, as described in the company’s announcement on March 1, 2018.

    Through the company’s analysis, Equifax believes it has satisfied applicable requirements to notify consumers and regulators. It does not anticipate identifying further impacted consumers, as it has now completed analysis of government issued identification numbers stolen together with names. It should be noted that the additional analysis also confirmed that some of the standardized columns had no real data in the data fields (specifically the data fields for passport numbers, CV2s, and driver’s license issue dates).

    Separately from the elements described above, which were contained within database tables and files, and as previously reported in the company’s press releases4 and responses to congressional questions, the attackers also accessed images uploaded to Equifax’s online dispute portal by approximately 182,000 U.S. consumers. As a national credit reporting agency, Equifax has a statutory obligation to facilitate disputes for consumers.

    Between October and December 2017, Equifax notified by direct mail the consumers who had uploaded information to the dispute portal that their dispute information was accessed. In order to provide complete information to consumers regarding their accessed images, Equifax provided these consumers individualized notifications with a list of the specific files they had uploaded onto Equifax’s dispute portal and the dates of those uploads.

    As part of the dispute process, some consumers may have uploaded government-issued identifications through the portal. Because the company directly notified each impacted consumer, the company had not previously analyzed the government-issued identifications contained in the images uploaded in the dispute portal. In response to congressional inquiry, we recently completed a manual review of the images that were uploaded by the impacted consumers. The chart that follows provides the approximate number of images of valid government-issued identifications.


    Government-Issued Identification    Approx. # of Images Uploaded
    Driver’s License
    Social Security or Taxpayer ID Card
    Passport or Passport Card
    Other5

    The data described above is not additional stolen data, and it does not impact additional consumers. The table reflects a summary of the company’s recent analysis of government-issued identifications that were uploaded by consumers to Equifax’s online dispute portal and stolen by the attackers.


    4  See, e.g., Equifax press releases dated September 7, 2017, and September 15, 2017.
    5  Includes other types of identification documents such as military IDs, state-issued IDs and resident alien cards.



    Equifax is committed to working with Congress and providing accurate information about the cybersecurity incident reported on September 7, 2017. Please let us know if you have questions about the information provided in this statement.



  • 15 Dec 2017 1:24 PM | Anonymous member (Administrator)

    U.S. Senate Report Reveals Internal Disagreements Over Funding Counterterrorism Programs

    The Democratic staff of the Senate Homeland Security and Governmental Affairs Committee on December 12th issued a report detailing the Administration’s intended funding cuts to Department of Homeland Security (DHS) state, local, and national counterterrorism programs based on an FY 2019 budget document DHS received from the Office of Management and Budget (OMB), which was provided to the Committee by a whistleblower.

    In late November 2017, a whistleblower disclosed a critical budget document related to the Department of Homeland Security (DHS) to the Democratic staff of the Senate Homeland Security and Governmental Affairs Committee. This document is titled the Department of Homeland Security Fiscal Year 2019 Budget and Policy Guidance and is dated November 28, 2017. It communicates OMB guidance from the President to the Department regarding its budget proposal. As the document states: “The following document provides fiscal year 2019 discretionary budget and policy guidance for the Department of Homeland Security (DHS). … The agency-specific discussions found below highlight those programs where budgetary levels reflect explicit changes in Administration policy.” It is important to note that the totals reflected in this document may not represent the President’s final budget.

    The Administration has proposed the elimination of $568 million worth of counterterrorism funding from DHS since the introduction of the FY 2018 and FY 2019 budget proposals compared to FY 2017 enacted levels. In the Administration’s FY 2018 budget proposal, $524 million of cuts to counterterrorism funding was proposed, with an additional $44 million in cuts proposed in the FY 2019 budget proposal.

    OMB Instructed DHS to Completely Eliminate Visible Intermodal Prevention and Response Teams and Cut $27 Million for Federal Air Marshals

    Visible Intermodal Prevention and Response (VIPR) teams are multi-disciplinary groups of security officers deployed to various locations to prevent and deter acts of terrorism. VIPR teams typically consist of Federal Air Marshals, Behavioral Detection Officers, Transportation Security Specialists-Explosives, Transportation Security Inspectors, and canine teams. Based on current intelligence and threat information, VIPR teams are deployed to secure vulnerable areas by working closely with federal, state, and local law enforcement officials.

    Bruce Hulme, CFE, BAI - ISPLA Director of Government Affairs
    Resource to Investigative & Security Professionals

  • 21 Aug 2017 6:06 PM | Anonymous member (Administrator)

    Security professionals may wish to review a "precedential" August 15, 2017 opinion of the U.S. Court of Appeals for the Third Circuit, No. 16-3883 involving an armed security officer (often armed with AR-15 and authorized to use deadly force) in the matter cited below. The case involved the Americans with Disabilities Act (ADA) and regulations of the U.S. Nuclear Regulatory Commission (NRC).




    On Appeal from the United States District Court  for the Middle District of Pennsylvania

    (M.D. Pa. No. 4-13-cv-02612)

    District Judge: Honorable Matthew W. Brann


    Submitted Under Third Circuit L.A.R. 34.1(a) May 26, 2017

    Before: HARDIMAN, ROTH, and FISHER, Circuit Judges. (Opinion Filed: August 15, 2017 by Circuit Judge Hardiman)

    Daryle McNelis appealed the District Court’s summary judgment in favor of his former employer, PPL Susquehanna, LLC. (After this case was filed, the employer, misidentified in the caption as Pennsylvania Power & Light Company, was renamed Susquehanna Nuclear, LLC.) McNelis worked at PPL’s nuclear power plant as an armed security officer from 2009 until he was fired in 2012 after failing a fitness-for-duty examination. McNelis sued, claiming his termination violated the Americans with Disabilities Act. The District Court disagreed, holding that McNelis was fired because he lacked a legally mandated job requirement, namely, the unrestricted security access authorization that the United States Nuclear Regulatory Commission requires of all armed security guards. The Third Circuit affirmed the judgment of the District Court.

    This appeal required the Third Circuit to analyze the relationship between the Americans with Disabilities Act (ADA) and the regulations promulgated by the Nuclear Regulatory Commission (NRC). The court reviewed the governing regulations and then turned to the facts of the case. As the operator of a nuclear power reactor, PPL was required to comply with regulations issued by the NRC, two of which were seminal to this appeal.

    First, PPL was required to implement a “fitness for duty program” to ensure that “individuals are not under the influence of any substance, legal or illegal, or mentally or physically impaired from any cause, which in any way adversely affects their ability to safely and competently perform their duties.” 10 C.F.R. § 26.23(b). If an employee’s fitness is “questionable,” the employer “shall take immediate action to prevent the individual from” continuing to perform his duties. 10 C.F.R. § 26.77(b).

    PPL also was required to maintain an “access authorization program” to monitor employees who had access to sensitive areas of the plant. 10 C.F.R. § 73.56(a)–(b). Under this program, nuclear power plants must “provide high assurance” that employees “are trustworthy and reliable, such that they do not constitute an unreasonable risk to public health and safety or the common defense and security.” 10 C.F.R. § 73.56(c). Before an employee is granted unrestricted access, he must undergo a psychological assessment that evaluates “the possible adverse impact of any noted psychological characteristics on the individual’s trustworthiness and reliability.” 10 C.F.R. § 73.56(e). Once granted, unrestricted access is subject to constant monitoring. Nuclear power plants must institute a “behavioral observation program” to identify aberrant behaviors. 10 C.F.R. § 73.56(f). All employees are required to report suspicious behaviors, and any report triggers a reassessment of that employee’s access. 10 C.F.R. § 73.56(f)(3). If during the reassessment an official believes the employee’s “trustworthiness or reliability is questionable,” the official must terminate the employee’s unrestricted access during the review period.

    PPL hired Daryle McNelis as a Nuclear Security Officer in 2009. In that role, McNelis had unrestricted access to PPL’s plant and was responsible for, among other things, protecting its vital areas and preventing radiological sabotage. McNelis carried a firearm (often an AR-15) and was authorized to use deadly force. In April 2012, McNelis experienced personal and mental health problems. McNelis was paranoid about surveillance. He believed that various items in his home (such as his children’s toy cars) were covert listening devices and he told his wife he would kill whoever was following him.

    McNelis also had problems with alcohol and his “use of alcohol [was] an issue of contention with his wife.” Finally, a close friend and co-worker of McNelis named Kris Keefer believed McNelis had become obsessed with bath salts—a synthetic drug that affects the central nervous system. McNelis had admitted to using bath salts in the past and coworkers suspected he was doing so again. In the midst of these troubles, McNelis’s wife moved herself and the children out of the family home. That same day, local police received an anonymous 911 call warning that McNelis may “come to the schools to get his children” and “may be under the influence and possibly armed.” The school district was locked down for two hours—but the police eventually determined that McNelis never intended to go to the schools.

    Two days later, McNelis agreed to meet his wife at a psychiatric facility for treatment. The treating physician’s initial evaluation noted that McNelis suffered from “paranoid thoughts, . . . sleeplessness, [and] questionable auditory hallucinations.”  After a three day stay in the inpatient unit, McNelis was discharged with instructions to “[d]iscontinue or reduce the use of alcohol.”

    During the events of April 2012, McNelis’s friend and co-worker Keefer became concerned by McNelis’s behavior. As required by NRC regulations and PPL policy, Keefer reported his concerns to a supervisor, explaining that McNelis was “emotionally erratic[,] . . . not sleeping well and having illusions” about surveillance.  Keefer also opined that McNelis’s behavior warranted “immediate attention.”

    Pursuant to NRC regulations, McNelis’s unrestricted access was “placed on hold” pending medical clearance.  McNelis then met with Dr. David Thompson—a third-party psychologist who performs fitness for duty examinations at approximately 20 nuclear facilities nationwide, including PPL’s plant. Dr. Thompson interviewed McNelis and performed testing required by PPL policy and NRC regulations. See 10 C.F.R. §§ 26.187, 73.56(e)(6). He then issued two reports, the second of which—a Substance Abuse Expert Determination of Fitness report—stated that “McNelis is considered not fit for duty pending receipt and review of a report from the facility where he receives an alcohol assessment and possibly treatment.”

    Upon learning that McNelis had been deemed not fit for duty by Dr. Thompson, PPL revoked McNelis’s unescorted access authorization and terminated his employment. After his internal appeal was denied, McNelis filed this suit. The District Court granted PPL summary judgment and McNelis timely appealed.

    The full decision is at:

    Bruce H. Hulme, CFE, BAI - ISPLA Director of Government Affairs


  • 24 Jul 2017 7:47 PM | Anonymous member (Administrator)

    "The methods that law enforcement use to access data outside their jurisdiction are outdated, and if left unaddressed, risk damaging international comity, U.S. competitiveness, and the global Internet economy."

    How Law Enforcement Should Access Data Across Borders


    In late 2013, U.S. federal law enforcement officials obtained a warrant as part of an anti-narcotics investigation to seize the contents of an email account belonging to a Microsoft customer whose data the company stored in Dublin, Ireland.

    Microsoft refused to comply with the order, arguing that the U.S. government cannot force a private party to do what U.S. law enforcement has no authority to do itself: use a warrant to conduct a search and seizure operation on foreign soil.

    This case exposed the cracks in the foundation of the current framework used by law enforcement agencies to access digital information and determine jurisdiction on the Internet. Moreover, attempts to resolve this dispute risk either hamstringing law enforcement efforts or distorting the global market place for digital services. This report explains the problems with the status quo, describes the limitations of existing proposals, and offers an alternative framework to resolve these issues along with a set of recommendations to operationalize this framework not just within the United States, but globally.

    This report builds from a previous ITIF report offering a framework on how nations should engage in Internet policymaking given the global nature of the Internet.

    It makes specific recommendations for how governments can use this framework to establish policies for law enforcement to access data. This report also assesses theoretical approaches to establish jurisdiction over that data, focusing on cross-border law enforcement requests, and not clandestine intelligence gathering for national security purposes. The framework herein is not intended for law enforcement requests for metadata (data that describes information about a communication).


    To operationalize the proposed framework, policymakers should pursue the following actions:

    • Modernize the internal processes for responding to foreign requests for legal assistance;
    • Work with other governments to draft and adopt model MLAT 2.0 language;
    • Push back against foreign data-localization requirements;
    • Update the Electronic Communications Privacy Act (ECPA) to protect domestic digital communications;
    • Restrict companies from storing data in countries with conflicting laws that limit law enforcement;
    • Engage with other nations to develop a “Geneva Convention on the Status of Data.”


    Law enforcement officials investigating crimes often want to gain access to an individual’s data, such as emails or files. But determining where data is stored can be complicated because it can be stored in a variety of different ways and locations. Sometimes companies store customer data in data centers that are located exclusively in a single country. Other times they store data on servers located in a foreign nation. Similarly, companies store customer data in data centers located in multiple countries, splitting data across multiple data centers to provide faster access to data, ensure data is always accessible, and prevent data loss.

    In this latter case, law enforcement officials will have more difficulty gaining access to personal communications data because it is stored in multiple jurisdictions. And, to make matters more complicated, in addition to the location of data, the location of the company storing that data and the nationality and/or residency of the person or persons to whom the data belongs all can vary. And yet, law enforcement must untangle this complicated web for every case in which it wants to seek lawful access to data. Law enforcement officials have two paths that they can use to compel access to data during criminal investigations.

    First, law enforcement officials can use domestic legal authorities to access data, such as search warrants and subpoenas. While U.S. law generally allows U.S. law enforcement to compel companies to turn over their own business records stored overseas, it is still an open question as to whether U.S. law enforcement can compel companies to provide their customers’ data through these processes.

    Second, in some cases where the evidence is not located within their jurisdiction, law enforcement officials may work through international processes, such as treaties for mutual legal assistance or police-to-police cooperation agreements, to access that data.

    U.S. policymakers should ensure law enforcement agencies can gain lawful access to information to protect their citizens and uphold U.S. laws, but without disadvantaging U.S. companies and workers facing global competition. Achieving this will require modernizing the process by which governments around the world obtain data stored outside their borders. Existing legal processes and treaties are woefully out of date and needlessly complex. Countries have mismatched legal assistance treaties, conflicting laws, and differing norms. Indeed, there is currently no comprehensive framework for how to successfully navigate cross-border jurisdictional disputes, especially those involving the digital economy. Such a patchwork of laws and rules may have been somewhat acceptable before the advent of the digitally integrated global economy. Now it is not. No one nation can solve this problem alone. Settling questions of jurisdiction over data will require global reforms. However, the United States can and should lead the way on these reforms, and this report offers a path forward.

    For the complete 38-page report go to:

    Bruce Hulme, CFE, BAI

    ISPLA Director of Government Affairs

  • 12 Jun 2017 10:13 AM | Anonymous member (Administrator)

    ISPLA is grateful to Stratfor for providing the following item by Scott Stewart, VP of Tactical Analysis. He is a former U.S. Department of State Special Agent and supervises Stratfor's analysis of terrorism and security issues.


    If you're an American, you don't want to be taken hostage. Since 2001, 90 Westerners have been kidnapped and killed overseas, and according to a January study from New America, 41 of them were Americans. That American deaths are disproportional to the number of total hostages raises the question: Why not negotiate?

    In the study titled "To Pay Ransom or Not to Pay Ransom? An Examination of Western Hostage Policies," authors Christopher Mellon, Peter Bergen and David Sterman examined the cases of 1,185 Westerners kidnapped overseas by terrorist, militant and pirate groups since Sept. 11, 2001. The study reached two conclusions: "First, countries that do not make concessions experience far worse outcomes for their kidnapped citizens than countries that do. Second, there is no evidence that American and British citizens are more protected than other Westerners by the refusal of their governments to make concessions."

    The study then made the following policy recommendations:

    1. The United States should clarify its stance on granting immunity from prosecution to third parties that assist the families and friends of hostages held by terrorists.
    2. The United States should facilitate prisoner exchanges for its citizens kidnapped abroad.
    3. The United States should encourage more data-driven study of hostage taking.
    4. The United States should evaluate the degree to which the rise of digital media has changed the cost-benefit analysis underlying its hostage policy.

    I had the privilege of debating one of the authors, Mellon, on the efficacy of these policy recommendations at a May 24 meeting of the Faith-Based Organizations Working Group, which is part of the Overseas Security Advisory Council. While deliberating a topic isn't normally within the scope of this column, U.S. hostage policy is of keen interest to nongovernmental organizations (NGOs), corporations, families of current hostages and private negotiators.

    Examining the Recommendations

    First, I agree with the study's recommendation of granting immunity from prosecution to third parties that assist families. A public uproar arose after senior officials in U.S. President Barack Obama's administration threatened to prosecute the families of James Foley and Steven Sotloff, both journalists captured in Syria, if they paid a ransom to the Islamic State. So in June 2015 the Obama administration altered the policy, saying families would not be prosecuted — a welcome change. Such prosecutions have zero jury appeal, and it is unconscionable to threaten American families as they endure the anxiety of trying to free a kidnapped child.

    Furthermore, there is a great deal of disparity in the way U.S. law applies to families depending on who the kidnappers are. For example, if al Shabaab, a designated foreign terrorist organization, kidnapped a family member in Somalia, a person could be charged with material support of a terrorist group if he or she paid a ransom. However, if Somali pirates kidnapped the family member, there would be no fear of being charged because pirates are not designated as terrorists. The only problem with the updated policy is that it is not a law and can be changed on a whim. Consequently, it needs to be codified. The policy, moreover, is unclear when it comes to companies, NGOs and private negotiators. There has never been a clear-cut statement on whether a company, NGO or private negotiator will be charged after paying a ransom to a terrorist group to free a kidnapped employee (American or otherwise).

    Finally, I have no qualms with the third and fourth policy suggestions. More research on the subject is always a good thing.

    A Critical Look at the Study

    One problem with the methodology of the study arises when the authors fail to account for the cases in which Americans were abducted, but nothing reasonable — or nothing at all — was demanded for their release. For example, a demand to "release all Iraqi captives and completely pull all American troops out of Iraq" is not reasonable, and the captors certainly did not expect it to be met. This means that many (we count at least 14) of the 41 Americans who died in captivity were killed strictly for propaganda purposes.

    As the sole global superpower, the United States is seen as the "Great Satan" by Iran and its militant proxies, and jihadists single out America for special hatred because it is viewed as the "head of the snake," or the leader of the crusader coalition. Al Qaeda believes it cannot establish a caliphate until the United States is driven from the Muslim world by terrorism and guerrilla warfare. Killing Americans in propaganda videos is seen as a way to achieve this end.

    Some examples of the propaganda executions of Americans include Daniel Pearl, Nicholas Berg, Paul Johnson, Cydney Mizell and Owen Armstrong. Several British citizens have been killed for the same reason as well, including Kenneth Bigley, Jason Swindlehurst, Jason Creswell, Alec MacLachlan and David Addison. It is also unclear whether the payment of a ransom would have led to the release of American hostages Sotloff, Foley, Peter Kassig and Kayla Mueller. The Islamic State may have deemed their propaganda value greater than any potential payout.

    If you remove hostages who never had a realistic chance of being freed via ransom or prisoner swap, the study's statistics begin to look quite different.

    Second, it's a bad idea for the U.S. government to exchange prisoners for hostages. Direct negotiations with terrorists give them an air of importance and parity, and government involvement inflates the value of hostages, increasing the incentive to take them. This inflation has been quite apparent in ransoms paid by governments in the Sahel over the past decade. Terrorists understand that a government has much deeper pockets than a family or NGO.   

    But Washington has not always had a policy of refusing to negotiate with terrorists. The administration of President Richard Nixon first adopted it during the 1973 seizure of the Saudi Embassy in Khartoum, Sudan. In the attack, U.S. Ambassador Cleo Noel Jr. was killed by the Black September Organization. Prior to the incident, Washington's policy had been to encourage governments to negotiate with terrorists in order to free American hostages.

    By confining the study to the post-9/11 era, the authors missed a significant lesson that the administration of President Ronald Reagan learned in the mid-1980s when it abandoned the no-concessions policy. Instead, it tried to follow the Israeli model of negotiation in the arms-for-hostages portion of the Iran-Contra scandal, which landed Reagan officials in hot water. Reagan’s team tried to use the money from Iranian arms sales to support the Nicaraguan Contras. The drive for negotiations was prompted by the 1984 abduction of CIA station chief William Buckley in Beirut.

    Even when separated from the Nicaraguan portion of the deal, the arms-for-hostages part of Iran-Contra was a bad policy. The arms deals succeeded in gaining the release of Benjamin Weir, Lawrence Jenco and David Jacobsen — three of the seven Western hostages then held in Lebanon. However, after their release, Hezbollah quickly restocked its supply of hostages and kidnapped eight more Westerners. In 1985, the Reagan administration sought to use the Israeli model again after the hijacking of TWA Flight 847. The United States worked with Israel to release 700 Shiite prisoners in exchange for the aircraft and its passengers. This exchange influenced Hezbollah's expectations regarding the Lebanon hostages, and it boosted the hopes of terrorists involved in later hijackings, including EgyptAir Flight 648, Pan Am Flight 73 and Kuwait Airways Flight 422.

    For some Hezbollah leaders, such as Imad Mughniyah, the kidnappings had a personal element because they helped free friends and relatives. Mughniyah’s brother-in-law and friend Mustafa Badreddine and 16 other accomplices, known as the Dawa 17, were imprisoned in Kuwait for the December 1983 bombing of the U.S. Embassy. These events convinced the U.S. government that it was time to return to its policy of no concessions.

    Government involvement in prisoner swaps can cause other problems as well, as illustrated by the case of 1st Lt. Muath al-Kaseasbeh, a Jordanian pilot who was shot down near Raqqa, Syria, on Dec. 24, 2014, and captured. In exchange for him and Japanese hostage Kenji Goto, the Islamic State demanded the release of Sajida Mubarak al-Rishawi, a female jihadist who had participated in a failed al Qaeda in Iraq suicide bombing in Amman in 2005. The Islamic State negotiated with the Jordanian and Japanese governments for several weeks; all the while, al-Kaseasbeh was already dead. The group had burned him to death and produced a long propaganda video of the gruesome execution. When a government negotiates, even the talks themselves can be strung out and used for propaganda.

    Proving a Negative

    One challenge that all governments and security directors face is proving a negative. What events did our policies prevent? It is very difficult to prove what did not happen.

    There are some anecdotal cases in which Washington's no-concession policy helped dissuade a kidnapping. One happened shortly after El Sayyid Nosair was arrested in the November 1990 assassination of Rabbi Meir Kahane. A group of Nosair’s friends and supporters — many of whom would later go on to play significant roles in the 1993 World Trade Center bombing — explored ways to get him out of New York’s Attica prison. One idea involved kidnapping former Secretary of State Henry Kissinger and exchanging him for Nosair. Fortunately for Kissinger, the U.S. no-deal policy led them to scrap the plot.

    Finally, the government did, as the report’s authors note, conduct a prisoner swap for U.S. soldier Bowe Bergdahl. But there is a big difference between someone who voluntarily enters a war zone, such as a journalist or aid worker, and someone who is ordered to go by their government. When a soldier or diplomat is sent into a dangerous environment, the government has a special duty to do everything in its power to get the hostage released, even in the case of Bergdahl, who was captured under "murky" circumstances. As the inflation principle of government involvement suggests, however, his freedom came at a price: The United States released five senior Taliban members for one U.S. soldier who is now facing desertion charges.

  • 06 Mar 2017 3:58 PM | Anonymous member (Administrator)

    Researchers can predict terrorist behaviors with more than 90% accuracy (2017-03-02)

    New framework developed by Binghamton University researchers could help understand terrorist behaviors and detect suspicious attacks

    BINGHAMTON, NY–Government agencies cannot always rely on social media and telecommunications to uncover the intentions of terrorists, as terrorists have become more careful in how they use these technologies when planning and preparing attacks. A new framework developed by researchers at Binghamton University, State University of New York is able to anticipate future terrorist behaviors by recognizing patterns in past attacks.

    Researchers at Binghamton have proposed a comprehensive new framework, the Networked Pattern Recognition (NEPAR) Framework, by defining the useful patterns of attacks to understand behaviors, to analyze patterns and connections in terrorist activity, to predict terrorists’ future moves, and finally, to prevent and detect potential terrorist behaviors.

    Using data on more than 150,000 terrorist attacks between 1970 and 2015, Binghamton University PhD student Salih Tutun developed a framework that calculates the relationships among terrorist attacks (e.g. attack time, weapon type) and detects terrorist behaviors with these connections. Mohammad Khasawneh, professor and head of the Systems Science and Industrial Engineering (SSIE) department at Binghamton University, assisted and advised Tutun with his research. Jun Zhuang, an associate professor and director of undergraduate studies in the Department of Industrial and Systems Engineering at the University at Buffalo, also contributed to this research. In the framework, there are two main phases: (1) building networks by finding connections between events, and (2) using a unified detection approach that combines proposed network topology and pattern recognition approaches. Firstly, the framework identifies the characteristics of future terrorist attacks by analyzing the relationship between past attacks. Comparing the results with existing data shows that the proposed method was able to successfully predict most of the characteristics of attacks with more than 90% accuracy.

    Moreover, after building the network with connections, the researchers propose a unified detection approach that applies pattern classification techniques to network topology and features of incidents to detect terrorism attacks with high accuracy, and identify the extension of attacks (90 percent accuracy), multiple attacks (96 percent accuracy) and terrorist goals (92 percent accuracy). Hence, governments can control terrorist behaviors to reduce the risk of future events. The results could potentially allow law enforcement to propose reactive strategies, said Tutun.

    "Terrorists are learning, but they don’t know they are learning. If we can’t monitor them through social media or other technologies, we need to understand the patterns. Our framework works to define which metrics are important," said Tutun. "Based on this feature, we propose a new similarity (interaction) function. Then we use the similarity (interaction) function to understand the difference (how they interact with each other) between two attacks. For example, what is the relationship between the Paris and the 9/11 attacks? When we look at that, if there’s a relationship, we’re making a network. Maybe one attack in the past and another attack have a big relationship, but nobody knows. We tried to extract this information."

    Previous studies have focused on understanding the behavior of individual terrorists (as people) rather than studying the different attacks by modeling their relationship with each other. And terrorist activity detection focuses on either individual incidents, which does not take into account the dynamic interactions among them; or network analysis, which gives a general idea about networks but sets aside functional roles of individuals and their interactions.

    "Predicting terrorist events is a dream, but protecting some area by using patterns is a reality. If you know the patterns, you can reduce the risks. It’s not about predicting, it’s about understanding," said Tutun.

    Tutun believes that policymakers can use these approaches for time-sensitive understanding and detection of terrorist activity, enabling precautions against future attacks.

    "When you solve the problem in Baghdad, you solve the problem in Iraq. When you solve the problem in Iraq, you solve the problem in the Middle East. When you solve the problem in the Middle East, you solve the problem in the world," said Tutun. "Because when we look at Iraq, these patterns are happening in the USA, too."

    The paper, "New framework that uses patterns and relations to understand terrorist behaviors," was published in Expert Systems with Applications.

  • 30 Dec 2016 12:07 PM | Anonymous member (Administrator)

    The following release, provided to ISPLA, is a rather long news clip on bank fraud schemes. It is interesting to see where future problems may arise regarding Automated Clearing House (ACH) procedures.

    American Banker: Faster ACH Payments Strain Bank Anti-Fraud Systems

    By Penny Crosman - December 29, 2016

    Faster ACH payments are taxing banks' ability to check for fraud and criminals are taking notice.

    As of September 2016, credit-based ACH payments are now being settled within the same day. These are transactions where one person or entity is pushing money from their bank account to another person or organization, using the automated clearinghouse. Examples include direct deposit, payroll, person-to-person and vendor payments.

    Where before banks had two to five days to analyze suspicious transactions, now in some cases they have only two hours. Banks haven't quite caught up with the shorter time frame for checking red flags, some say, and fraudsters have jumped on this opportunity.

    "Recently we've seen more evidence of incidences of ACH fraud than we have in the past," said Andrew Davies, a vice president at Fiserv who helps financial institutions worldwide spot potentially illegal transactions.

    Davies has seen recent cases of malicious software tampering with ACH files to perpetrate fraud. For instance, hackers are manipulating payroll files and adding themselves as fake employees to collect money. Some of the cases have been in the U.S. 

    Some banks' systems don't sufficiently scrutinize ACH files

    "A lot of their fraud filters will not necessarily have the wherewithal to break out all the transactions, look at history of the accounts on the incoming and outgoing side, look at the batches within the file, and then look at the behavior associated with the overall file from an ACH perspective," Davies said.

    Money lost this way will be difficult to recover

    "Anytime you push money out, it's really hard to pull it back," said Ruston Miles, founder and chief innovation officer of Bluefin Payment Systems, a payment processor. For instance, "if it's a payroll file, the money has been pushed out, and you can't go out to the customer and pull it back."

    A lot of fraud monitoring is still done manually, Miles said.

    "Most banks have electronic fraud detection systems that catch transactions that don't look right and put them in an exception bin, and these banks employ floors of people who inspect the flagged transactions," Miles said. "With same-day, all that time gets crunched down, so you either have to add more people or you have to open the floodgates on your fraud detection systems or you've got to get more picky about fraud detection."

    Along with faster settlement, the increasing interconnectedness of international payment systems taxes fraud investigators' skills and resources. The fact that dozens of countries are increasing the speed of payment transactions brings an increased level of risk.

    "If you're settling transactions between financial institutions more frequently or in shorter time frames, and you have too many false positives or you have a limited amount of resources to remediate unusual activity, the funds … may well have moved on to South Korea in a relatively short time frame, and you're still sitting on an alert you haven't had a chance to look at," Davies said.

    "I wouldn't say banks are scrambling but there's increased focus and understanding of the elevated risks associated with those transactions," Davies said.

    In a way, this problem isn't new. There have long been different speeds for ACH payments. Also, in some cases you can pay to expedite ACH or bill payments.

    "Many financial institutions have found that if criminals can pay a fee for expedited processing, they don't mind paying the fee, and you see a shift in many cases to these quicker mechanisms," said David Pollino, deputy chief security officer for Bank of the West.

    He points out that there's an upside: Now banks have a way to risk-stack their products, knowing that the faster services are inherently more attractive to criminals.

    Jane Larrimer, executive vice president of ACH network administration at Nacha, said she is not aware of increased fraud over the network. (Nacha refers to itself as "The Electronic Payments Association" and was formerly the National Automated Clearing House Association.)

    "We have not heard that at all," she said. "It's been amazingly quiet." Bank members worked to make sure they had robust risk and fraud systems during the 16-month lead-up to the faster credit payments.

    "They did that work and they were ready to go on phase 1," Larrimer said.

    Banks aren't required to report ACH-related fraud to Nacha. "But if there was some upswing, we do hear things," Larrimer said.

    Pollino is also unworried about the threat of fraudsters breaking in and changing ACH files, because doing so takes a lot of work. Phishing attacks are still the biggest fraud concern at Bank of the West.

    "Why hack into a system, understand a complex financial package, figure out where that file is and then change the file if you can just email the person and ask them for the money?" he said.

    Next Challenge: Same-Day ACH Debits

    Same-day ACH debit payments, which go into effect Sept. 15, 2017, will be even trickier for fraud prevention teams.

    ACH debit transactions typically take two to three days to clear and settle, noted Steve Mott, principal of BetterBuyDesign, an advisory firm in Stamford, Conn. And banks' fraud systems take full advantage of that window.

    "Some would say it's a lazy way, because it takes advantage of the time to say, 'OK, I don't have to check this stuff until I come in on Monday morning,' " Mott said.

    The banks' fraud systems, controls and secondary and tertiary checks all assume the bank has plenty of time to perform those checks. Those will need to be updated.

    "What's happened historically is that none of the financial institutions have wanted to change much in the way they did faster and more secure stuff through the pipes until they absolutely have to," Mott said.

    Power of the Bank Account Number

    In a faster-ACH-payments world, the bank account number becomes more powerful because it can be turned into cash more quickly.

    To date, bank account numbers have been worth less than credit card numbers in the black market because they've been harder to use.

    With same-day settlement, fraudsters will be able to use bank account numbers to make real-time purchases, such as software, movie and song downloads, and receive the items before a bank can stop them.

    "If fraud starts really going there and merchants start losing, merchants will either have to add anti-fraud detection systems themselves or they may turn away from ACH payments for any real-time or near-real-time transfers, because they can't be assured of the funds," Miles said. 

    Americans are fairly casual about writing and sending checks, which have our full account number printed at the bottom, to anyone because of the built-in protections of time, Miles said. I recently sent a yearend tip by check to the person who delivers my newspaper. This is someone I've never met, who lives in a town I've never been to, and for all I know she could be a petty criminal. Now she has my checking account number and my bank name and routing number, as well as my address and signature.

    "Now we're taking out that time buffer, making this twice a day, same day, meaning that it's more convenient and easier for fraudsters to capitalize on the account numbers."

    But account numbers printed on checks are unlikely to be a large-scale problem, Miles pointed out.

    "Hackers want to automate these attacks; they don't want to dig through the trash all over the country to steal a million check numbers," he said. "They want to open their laptop and see that 10,000 bank account numbers were found over the past week, through automated attack tools. So that's the big threat."

    Miles suggested the banking industry needs to develop security standards like PCI. "The best way to fix the problem is to not have the fraudsters get their hands on the bank account numbers in the first place, and that comes through data security and not through authentication," Miles said. For instance, the PCI data security standard requires that payment card data be encrypted at all times; this same rule could help protect bank account data. Tokenization of account numbers could also help, he said.
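    The tokenization Miles mentions can be illustrated with a minimal token vault: merchants and processors store only a random token, while the real account number lives solely in the vault. This is a sketch of the concept, not a production design; a real vault would add access control, auditing and encrypted storage, and the account number shown is made up.

    ```python
    # Minimal sketch of account-number tokenization along the lines Miles
    # suggests. A stolen token is useless without access to the vault.

    import secrets

    class TokenVault:
        def __init__(self):
            self._by_token = {}
            self._by_account = {}

        def tokenize(self, account_number: str) -> str:
            """Return a stable random token for an account number."""
            if account_number in self._by_account:
                return self._by_account[account_number]
            token = secrets.token_hex(16)   # random; reveals nothing
            self._by_token[token] = account_number
            self._by_account[account_number] = token
            return token

        def detokenize(self, token: str) -> str:
            """Recover the account number; only the vault can do this."""
            return self._by_token[token]

    vault = TokenVault()
    t = vault.tokenize("021000021-123456789")   # hypothetical account
    assert vault.detokenize(t) == "021000021-123456789"
    assert vault.tokenize("021000021-123456789") == t  # stable per account
    ```

    The design choice mirrors Miles's point: the security win comes from keeping the real number out of merchants' hands entirely, not from authenticating whoever presents it.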

    Continual Improvement

    As ACH payments continue to get faster, along with FedWire, Chips, and other types of payments, banks are going to have to step up their fraud analytics and security efforts accordingly. Those processes will need to be continually improved, too, Pollino said.

    "As soon as you're happy with your controls, the criminals will get happy with them as well because they'll figure out a way around them," he said.

    Nacha members have been upgrading their risk processes and procedures, Larrimer said. "Same-day is the tipping point," Larrimer said. "We're the first movement in faster payments. So they're starting here and I don't think this is the end of it."

    She also noted that faster payments can lower transaction risk, especially credit and operations risk.

    "And the faster you can settle things on the system, that lessens the systemic risk," she said.

    One thing banks need to do is understand how the criminal rings that target them work, Pollino suggested.

    "Are they looking for the small, quick score or are they looking for the larger, long-term payoff?" he said. "Criminals looking for the quick, small score might be drawn toward this type of product." The bank's fraud analytics and fraud detection strategies need to be tuned to that.

    Third-party data sets become increasingly useful to help vet the parties to a transaction, Pollino said. Names, phone numbers, email addresses and account numbers can all be checked against databases run by Early Warning Services, LexisNexis, Experian and others. 

    "It's becoming more and more important to understand where this money is going, who's at the other end of the transaction," Pollino said. "Does your customer know who's at the other end of the transaction? What personal information is included in a transaction?"


    Bruce Hulme, CFE, BAI

    ISPLA Director of Government Affairs

    ISPLA: Keeping Investigative & Security Professionals Informed of Emerging Issues

  • 17 Nov 2016 10:56 AM | Anonymous member (Administrator)

    Prioritizing Internet of Things (IoT) Security

    While the benefits of IoT are undeniable, the reality is that security is not keeping up with the pace of innovation. As we increasingly integrate network connections into our nation’s critical infrastructure, important processes that once were performed manually (and thus enjoyed a measure of immunity against malicious cyber activity) are now vulnerable to cyber threats. Our increasing national dependence on network-connected technologies has grown faster than the means to secure it.

    The IoT ecosystem introduces risks that include malicious actors manipulating the flow of information to and from network-connected devices or tampering with devices themselves, which can lead to the theft of sensitive data and loss of consumer privacy, interruption of business operations, slowdown of internet functionality through large-scale distributed denial-of-service attacks, and potential disruptions to critical infrastructure.

    Last year, in a cyber attack that temporarily disabled the power grid in parts of Ukraine, the world saw the critical consequences that can result from failures in connected systems. Because our nation is now dependent on properly functioning networks to drive so many life-sustaining activities, IoT security is now a matter of homeland security.

    Overview of Strategic Principles

    Many of the vulnerabilities in IoT could be mitigated through recognized security best practices, but too many products today do not incorporate even basic security measures. There are many contributing factors to this security shortfall. One is that it can be unclear who is responsible for security decisions in a world in which one company may design a device, another supplies component software, another operates the network in which the device is embedded, and another deploys the device. This challenge is magnified by a lack of comprehensive, widely-adopted international norms and standards for IoT security. Other contributing factors include a lack of incentives for developers to adequately secure products, since they do not necessarily bear the costs of failing to do so, and uneven awareness of how to evaluate the security features of competing options.

    Below is a link to a 17-page November 15, 2016 report by the U.S. Department of Homeland Security entitled "Strategic Principles for Securing the Internet of Things (IoT)." It sets forth ways to organize strategies to address IoT security challenges.


  • 02 Mar 2016 10:54 AM | Anonymous member (Administrator)

    The item below was furnished to ISPLA from a regulatory agency having jurisdiction over the financial services industry. It outlines various pretexts used against banks and others to obtain personally identifiable information (PII) of their customers. - Bruce Hulme, ISPLA Director of Government Affairs

    American Banker: To Case the Joint, Press 1: Crooks Refocus on Bank Call Centers - By Penny Crosman - March 1, 2016

    The often-overlooked call center is getting more attention, as banks realize that stronger security on online and mobile channels has driven cybercriminals to focus their energies on conning phone reps.

    They're tricking these eager-to-please call center agents into coughing up customer information or letting them reset passwords on other people's accounts.

    "Fraudsters will always use the weakest plank in the door," said Gary McAlum, chief security officer at USAA. "If you're using strong authentication security but someone can call into a call center and social-engineer through the call center representative to reset their account, then that's the weak point in the network. It has to be an end-to-end holistic approach."

    This problem made news when Apple Pay came out in September 2014. There was an immediate rash of call center fraud, as cybercriminals realized they could set up accounts using stolen credit card data. The problem has steadily grown since then.

    Last year, one in every 2,900 calls coming into large banks' call centers was fraudulent, according to Pindrop Security. This year, the number is closer to one in every 2,000 calls. Among regional banks, it's more like one in 700. Pindrop's software analyzes incoming calls for signs of fraud and scores them for risk. For instance, if a call is coming from Nigeria and the same caller number has called the contact center for different accounts, it will probably end up with a high risk score. (Pindrop was one of American Banker's Tech Companies to Watch in 2013 and it recently received $75 million from Google Capital. Its customers include eight of the top 15 U.S. banks.) The company will release this year's fraud report in April but gave American Banker a few numbers in advance.
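    The risk-scoring idea in the paragraph above can be made concrete with a toy model that combines a few coarse signals into a 0-100 score. The signals, weights and example numbers below are invented for illustration; Pindrop's actual model is proprietary and far more sophisticated (including acoustic analysis of the call itself).

    ```python
    # Toy version of the call-risk scoring described above: a handful of
    # coarse signals combined into a 0-100 score. Weights are illustrative.

    def score_call(call, accounts_seen_by_number):
        """Return a 0-100 risk score for an incoming call."""
        score = 0
        # Call origin does not match any country on file for the customer.
        if call["origin_country"] not in call["customer_countries"]:
            score += 40
        # Same caller number has phoned in about more than one account.
        if len(accounts_seen_by_number.get(call["caller_number"], set())) > 1:
            score += 40
        # Carrier-level hints that the caller ID may be spoofed.
        if call["spoofing_indicators"]:
            score += 20
        return score

    seen = {"+2348000000000": {"acct-1", "acct-7"}}
    call = {"origin_country": "NG", "customer_countries": {"US"},
            "caller_number": "+2348000000000", "spoofing_indicators": False}
    print(score_call(call, seen))  # 80: geographic mismatch + multi-account probing
    ```

    A call like the one in the example, matching the article's "coming from Nigeria and calling about different accounts" scenario, would land well above a typical review threshold.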

    The average fraud exposure caused by these hackers — that is, the average amount they could potentially steal after successfully logging in by gaming the call center — was $7.6 million per bank in a 2014-15 study. More recently, in a study that covered the 12 months through February, it was $11 million per bank, according to Pindrop.

    So the attackers have been able to expand the pools of money that they can reach by over 45%.

    "When we're working with customers, we're finding about 30% to 80% of all fraud has a phone component," said Vijay Balasubramaniyan, Pindrop's CEO and chief technology officer.

    Bankers are generally tight-lipped about sharing what technology they're using to better secure their call centers.

    "The more information you provide to the fraudsters, the better [equipped] they are to perpetrate their fraud," said Brett Beranek, director of product strategy for voice biometrics at Nuance Communications. His company's technology analyzes incoming calls for fraud, detecting mismatches between the caller and previous recordings tied to the same account. It can also spot people calling about multiple accounts and fraudsters whose voices are on a blacklist. "The more information is disseminated, the less effective fraud groups can be at stopping the fraudsters."

    Canada's Tangerine Bank recently invested in secure chat software to allow call center agents to have encrypted, archived chat sessions with authenticated customers, according to the bank's chief information officer, Charaka Kithulegoda.

    Patience and PII

    One reason call centers are facing a rise in fraud attempts is the prevalence of personally identifiable information, McAlum observed.

    Fraudsters painstakingly gather information about account holders on the Web and use it to manipulate customer service agents who are trained to be helpful, not to block crime. The fraudster might say, "I don't remember my own password, let me call you right back." Then he'll go out to social media sites and figure it out. 

    "One call center agent completely buckled and started reading out every single account transaction on [a customer's] account for the last month," Balasubramaniyan said. "Though [the fraudster] didn't manage to get a wire at that point, now that he had all his transactions, he called back in, and when the next call center agent said, 'How do I trust you?' he started rattling off these transactions. The call center agent said, 'OK, it must be you,' and let him through."

    Balasubramaniyan's all-time favorite call was from a fraudster who, when asked, "What's your mother's maiden name?" replied, "My dad married thrice, so can I take three guesses?"

    "It doesn't even make sense — so what, your dad married thrice?" Balasubramaniyan said.

    The call center agent allowed him to take three guesses, the last of which was "Smith," which is one of the most popular names in the world and happened to be right. After that call, he wired $97,000 out of the bank. 

    Beranek said by closely monitoring what goes on in the call centers, banks can learn how fraudsters operate.

    "Often a fraudster will call in several times and progressively increase the complexity of their calls," he said. "So for call No. 1, they would ask for a benign piece of information that would be very easy to socially engineer the contact center agent to provide. By call five or seven, they have amassed enough information that they could completely take over the account, go online and perform a wire transfer."

    Fraudsters often need several attempts to break into accounts, because as they search the Web for information on account holders, sometimes the data they get is correct, sometimes it isn't.

    IVR Reconnaissance

    In addition to live agents being fooled by fraudsters, there's an uptick in the gaming of automated interactive voice response systems, or IVRs. Cybercriminals can robo-call IVRs continuously to guess a PIN. (A four-digit PIN has 10,000 possible combinations.)
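    The arithmetic behind that parenthetical is what makes automated probing attractive, and it also shows why simple countermeasures bite. A back-of-the-envelope sketch, in which the guess rate and lockout figures are illustrative assumptions rather than measured values:

    ```python
    # Back-of-the-envelope view of why a 4-digit PIN invites robo-dialing,
    # and how a lockout changes the picture. Figures are illustrative.

    PIN_SPACE = 10 ** 4         # 4 decimal digits: 10,000 combinations

    def hours_to_crack(rate_per_hour: int) -> float:
        """Expected hours to guess one PIN with no lockout.

        On average the attacker covers half the space before hitting it.
        """
        return (PIN_SPACE / 2) / rate_per_hour

    def success_probability(k: int) -> float:
        """Chance of guessing a random PIN given k attempts per account."""
        return k / PIN_SPACE

    print(hours_to_crack(60))       # ~83 hours at one guess per minute
    print(success_probability(3))   # 0.0003 per account with a 3-strike lockout
    ```

    Without a lockout, an unattended robo-dialer eventually wins; with one, the attacker's only play is to spread a handful of guesses across very many accounts, which is exactly the cross-account calling pattern the monitoring tools look for.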

    In 2014, only 47% of calls to banks went through IVR systems. This year, more than 60% of calls will, according to Pindrop, as banks are cutting back on live agent calls. (It behooves Pindrop to point all this out, as it's getting ready to release an IVR security system that will act similarly to its call center software.)

    There isn't always fraud happening within the IVR itself, Balasubramaniyan said. "What the IVR is great in is reconnaissance, which is finding out about an account without talking to a call center agent," he said. It's also good for trying different combinations of account numbers, PINs and card verification values (those three-digit codes on the backs of payment cards) without coming up on any radar.

    "If you're able to detect that activity, you can forewarn banks on average 30 days before account takeover even starts happening," Balasubramaniyan said. "It's almost like 'Minority Report,' " the science fiction movie about a clairvoyant police force.

    In addition to security software, of course, part of the answer is to make call center agents more aware of social engineering and help them look for signs of foul play. One of our cybersecurity predictions for 2016 was that banks and other companies would address the problem of fraudsters' easily being able to reset passwords.

    The hard part is taking a tougher stance on such helpful call center duties, without turning away legitimate customers.


  • 07 Jan 2016 12:07 PM | Anonymous member (Administrator)

    The item on encryption below may be of interest to our European INTELLENET members. It concerns a Dutch government document on encryption and is quite informative on the subject of "backdoor" access by government. However, it is quite lengthy. - Bruce Hulme, ISPLA Director of Government Affairs


    Full translation of Dutch Government document by Matthijs R. Koot, posted on 2016-01-05 (updated 2016-01-06)

    TL;DR: on January 4th 2016, the Dutch government stated that it will, at this time, not take restrictive legal measures considering the development, availability and use of encryption within the Netherlands. Some things to keep in mind:

    • they explicitly state ‘at this time’ — the possibility remains that their position changes in the future;
    • current Dutch law provides some forms of compelled decryption: first, two provisions in intelligence law regarding targeted hacking and targeted interception (note: the law does not forbid the use of this power against a target, but for obvious reasons — e.g. maintaining operational secrecy — it seems likely it will typically only be used against third parties, for instance a provider, a roommate, etc.), and second, one provision in the code of criminal procedure (criminal law) regarding access to a secured computer (the law forbids the use of this power against a suspect because of nemo tenetur, i.e., the right to not self-incriminate);
    • in July 2015, the Dutch government proposed compelled decryption for untargeted (bulk) interception in a draft intelligence bill (intelligence law). The draft bill is currently being revised and is expected to be submitted to the House of Representatives by the end of Q1/2016. AFAIK it is expected that the final bill, that will be debated in the House of Representatives, will still include the new decryption provision. The status of the bill can be viewed here;
    • in December 2015, the Dutch government stated they cancelled the decryption provision in the final version of a cybercrime bill (more) (part of criminal law). The stated reason for cancelling: incompatibility with nemo tenetur. Why they initially introduced it — notably following a rather critical study by professor Bert-Jaap Koops — yet now cancelled it, is not clear (to me).

    On January 4th 2016, the Dutch government released a statement on encryption. It is covered by El Reg. Here is a full, unofficial translation of that statement (~1600 words; hyperlinks were added by the above translator):

    Government position on encryption

    We hereby submit the government position on encryption. This fulfills promises made during the General Meeting of the Telecom Council of June 10th 2015 (Parliamentary Papers 2014-2015, 21501-33, nr. 552) and the General Meeting of the JHA Council of October 7th 2015.


    Encryption is increasingly easy to obtain and use, and increasingly common in regular data communication. The government, the private sector and citizens increasingly use encryption to protect the confidentiality and integrity of communication and stored data. That is important for public trust in digital products and services, and for the Dutch economy, in the light of the rapidly developing digital society. At the same time, encryption obstructs access to information necessary for prosecution services and intelligence & security services when malicious persons (such as criminals and terrorists) use it. The recent attacks in Paris, where the terrorists possibly used encrypted communications, raise the justified question of what is needed to provide these services with proper insight into attack planning, and to maintain that insight.

    The duality described in the previous paragraph was also heard in the public debate in the past months about the dilemmas of the use of encryption. The House [of Representatives; i.e., the lower house] has also discussed this. During the General Meeting of the Telecom Council it was asked what the government intends to do regarding the promotion of strong encryption. Besides that, the House requested the government to establish a position on encryption.

    Next, the importance of encryption for the system and information security of the government and the private sector, and for the constitutional protection of privacy and confidential communication, will be discussed. The importance of prosecuting serious criminal offenses and of protecting national security will then be set out. Finally, after weighing the interests, a conclusion is drawn.

    The Dutch situation cannot be discussed without taking into account the international context. Software for strong encryption is increasingly available worldwide, and is already integrated into products and services. Considering the broad availability and use of advanced encryption techniques, and the cross-border nature of data traffic, options to act at a national level are limited.

    Importance of encryption for the government, private sector and citizens

    Cryptography plays a key role in technical security in the digital domain. Many cyber security measures in organizations depend strongly on the use of encryption. Secure storage of passwords, the protection of laptops against loss or theft, and the secure storage of backups are more difficult without the use of encryption. The protection of data transferred via the internet, for instance during internet banking, is only possible through the use of encryption. Due to the interconnectedness of systems and the many global paths that communication can travel, the risk of interception, breach, access or manipulation of information and communication is always present.
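    [Translator's aside: the "secure storage of passwords" mentioned above is, in practice, usually done with salted key-derivation hashing rather than reversible encryption. A minimal sketch using only Python's standard library; the function names and iteration count are illustrative, not from the statement:]

    ```python
    import hashlib
    import hmac
    import os

    ITERATIONS = 100_000  # deliberately slow to resist brute force; use more in production


    def hash_password(password, salt=None):
        """Derive a salted, slow hash to store instead of the plaintext password."""
        if salt is None:
            salt = os.urandom(16)  # fresh random salt for each password
        digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
        return salt, digest


    def verify_password(password, salt, expected):
        """Recompute the hash and compare in constant time."""
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
        return hmac.compare_digest(candidate, expected)


    # The stored (salt, digest) pair reveals nothing useful without the password.
    salt, digest = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, digest))  # True
    print(verify_password("wrong guess", salt, digest))                   # False
    ```

    [Even if an attacker steals the stored pairs, each guess costs the full iteration count, which is the point the statement makes about loss and theft scenarios.]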

    The government increasingly communicates with citizens via digital means, and provides services where confidential data is exchanged, such as the use of DigiD [a national authentication system that Dutch citizens can use to log in to the IRS, the cadastre, their municipality, etc.] or declaring taxes. As stated in the coalition agreement of 2012, citizens and companies should be able to carry out their interactions with the government entirely digitally by 2017. The government has the responsibility to ensure that confidential data is protected against access by third parties: encryption is indispensable for this. The protection of communication within the government also depends on encryption, such as the security of the exchange of diplomatic messages, and military communication.

    For companies, encryption is essential to store and transfer business information securely. The ability to use encryption strengthens the international competitiveness of the Netherlands, and promotes an attractive climate for businesses and innovation, including startups, data centers and cloud computing. Trust in secure communication and storage of data is essential for the (future) growth potential of the Dutch economy, which mainly resides in the digital economy.

    Encryption supports the protection of privacy and the confidentiality of citizens’ communications, because it provides them with a means to protect the confidentiality and integrity of personal data and communications. This is also important for exercising the right to free speech. It enables citizens, but also persons who hold an important democratic profession, such as journalists, to communicate confidentially.

    Encryption thus enables everyone to ensure the confidentiality and integrity of communication, and defend against, for instance, espionage and cyber crime. Fundamental rights and freedoms, as well as security interests and economic interests, benefit from this.

    Encryption, prosecution services and intelligence & security services

    The investigatory powers and means available to the services must be equipped for the present and future digital reality. Effective, lawful access to data promotes the security of the digital and physical world. Encryption used by malicious persons hinders access to data by the prosecution services and intelligence & security services. The services experience these barriers, for instance, when they investigate the distribution and storage of child pornography, while supporting military missions abroad, while countering cyber attacks, and when they want to gain and maintain insight into terrorists who are planning attacks. Criminals, terrorists and opponents in armed conflicts are often aware that they can attract the attention of the services, and also possess advanced encryption methods that are difficult to circumvent or break. The use of such methods requires little technical knowledge, because encryption is often an integral part of the internet services that they too can use. That complicates, delays, or makes it impossible to gain (timely) insight into communications for the purpose of protecting national security and prosecuting criminal offenses. Furthermore, court hearings and the presentation of evidence in court for a conviction can be severely hindered.

    The right to privacy and confidentiality of citizens’ communication

    As mentioned before, the use of encryption supports citizens in ensuring privacy and confidentiality of their communication. Said lawful access to data and communication by prosecution services and intelligence & security services constitutes a breach of the confidentiality of citizens’ communication.

    Confidentiality of communication involves the constitutional protection for privacy and the right to protection of correspondence [letters, snail mail], telephone communication and telegraph communication (hereafter: ‘confidentiality of communications’). These constitutional rights are laid down in, respectively, Article 10 and Article 13 of the Dutch constitution. Besides that, these fundamental rights are laid down in Article 8 ECHR and Article 7 and Article 8 of the Charter of Fundamental Rights of the EU (insofar EU law is affected).

    The protection of constitutional rights applies to the digital world. Said constitutional regulations and international regulations provide the framework to counter unlawful breaches. Said rights are not absolute, meaning that limitations can be established insofar they meet the requirements set by the Dutch constitution and the ECHR (and insofar European Union law is affected, the EU Charter). A limitation is permissible when it serves a legitimate purpose, is established by law, and the limitation is foreseeable and cognizable [=transparent]. Furthermore, the limitation must be necessary in a democratic society. Finally, the infringement must be proportional, which means that the government’s purpose of the infringement must be proportional in relation to the infringement on the right to privacy and/or the right to confidentiality of communications.

    These requirements provide the framework for weighing the interests involved in encryption, such as the right to privacy and the right to confidentiality of communications, public and national security, and the prevention of criminal offenses. This framework, insofar it involves the special powers of the intelligence & security services, is also laid down in the Intelligence & Security Act of 2002 (‘Wiv2002’, Article 18 and Article 31). The obligations [for third parties] to cooperate with decryption laid down in the Wiv2002 (Article 24, third paragraph, and Article 25, seventh paragraph) and in the Code of Criminal Procedure (‘WvSv’, Article 126m, sixth paragraph) can be invoked if the related special powers are exercised after such weighing.

    Discussion and conclusion

    Nowadays it is increasingly less often possible to break encryption. Furthermore, it is increasingly less often possible to demand unencrypted data from service providers. Increasingly often, modern uses of encryption mean that data is processed by the service providers only in encrypted form. Considering the importance of investigation and prosecution, and the interests involved with national security, these developments necessitate the search for new solutions.

    Currently, there is no outlook on possibilities to, in a general sense, for instance via standards, weaken encryption products without compromising the security of digital systems that use encryption. For instance by introducing a technical doorway [=backdoor, exceptional access] in an encryption product that would enable prosecution services to access encrypted files, digital systems can become vulnerable to criminals, terrorists and foreign intelligence services. This would have undesirable consequences for the security of communicated and stored information, and the integrity of IT systems, which are increasingly important to the functioning of society.

    In carrying out their legal tasks, prosecution services and intelligence & security services are partially relying on cooperation from providers of IT products and services. Given this dependence, consultation is necessary with providers regarding effective data provisioning in case of the use of their services by malicious persons, while taking into account everyone’s role and responsibilities, as well as the legal frameworks.

    Given this discussion, we draw the following conclusion:

    The government has the duty to protect the security of the Netherlands and to prosecute criminal offenses. The government emphasizes the necessity of lawful access to data and communication. Furthermore, governments, companies and citizens benefit from maximum security of digital systems. The government endorses the importance of strong encryption for internet security, for supporting the protection of citizens’ privacy, for confidential communication by the government and companies, and for the Dutch economy.

    Therefore, the government believes that at this time it is not desirable to take restricting legal measures concerning the development, availability and use of encryption within the Netherlands. The Netherlands will propagate this conclusion, and the arguments that underlie it, internationally [recall: the Netherlands chairs the EU in the first half of 2016 and focuses on, among others, the digital domain]. Regarding the promotion of strong encryption, the Minister of Economic Affairs will follow up on the intent of the amendment (Parliamentary Papers 2015-2016, 34300 XIII, nr. 10) on the budget of the Ministry of Economic Affairs [=grant EUR 500k to OpenSSL].

    (signed by the Minister of Security & Justice and the Minister of Economic Affairs)

    Further reading:

    • 2016-01-06: Wired is reporting on David Chaum’s plan to end the crypto war: PrivaTegrity, a backdoor scheme that requires cooperation between nine server administrators from nine countries. Chaum reportedly developed it “as a side project for the last two years along with a team of academic partners at Purdue, Radboud University in the Netherlands, Birmingham University and other schools”. Recall this sentence in the above translation of the Dutch gov’t statement on encryption: “Currently, there is no outlook on possibilities to, in a general sense, for instance via standards, weaken encryption products without compromising the security of digital systems that use encryption”. It is unclear (to me) whether the authors of the Dutch gov’t statement were aware of Chaum’s idea at the time they wrote that sentence. For details on Chaum et al.’s “cMix” scheme, see cMix: Anonymization by High-Performance Scalable Mixing (.pdf, 2016).
