2019 Review Tools and Software Supporting Semantic Reasoning

Abstract

Over the past 10 years, the adoption of cloud services has grown rapidly, leading to the introduction of automated deployment tools to address the scale and complexity of the infrastructure companies and users deploy. Without the aid of automation, ensuring the security of an ever-increasing number of deployments becomes more and more challenging. To the best of our knowledge, no formal automated technique currently exists to verify cloud deployments during the design phase. In this case study, we show that Description Logic modeling and inference capabilities can be used to improve the safety of cloud configurations. We focus on the Amazon Web Services (AWS) proprietary declarative language, CloudFormation, and develop a tool to encode template files into logic. We query the resulting models with properties related to security posture and report on our findings. By extending the models with dataflow-specific knowledge, we use more comprehensive semantic reasoning to further support security reviews. When applying the developed toolchain to publicly available deployment files, we discover numerous violations of widely-recognized security best practices, which suggests that streamlining the methodologies developed for this case study would be beneficial.

Introduction

The term Infrastructure as Code (IaC) refers to the practice of configuring, provisioning, and updating system resources from source code files, which are compiled into atomic instructions and then executed to deploy the desired architecture [29]. The advantage of handling code, instead of manually provisioning resources, lies in the capability to use version control systems, orchestration frameworks, and automated testing tools as part of the deployment process. In addition to instructions relevant for resource creation, dependencies, and updates, IaC configuration files contain data about settings, dataflow, and access control. In a time when cloud companies provide customers with simple-to-launch, albeit extremely powerful, infrastructure, it is crucial to automatically and provably verify the security of such systems.

In this paper, we investigate IaC deployment frameworks and how these can be formally modeled and reasoned upon. We explore the usage of description logics (DLs) as a conceptual-modeling formalism that is expressive, decidable, and equipped with mature tooling. We argue that formal reasoning techniques applied to deployment templates are an immensely valuable tool for developers and security engineers: they substantially aid the automation of time-consuming security reviews; they help detect complex logical errors at earlier stages; and they contain the costs that finding and fixing security problems at later stages would cause. As the prevalence of cloud infrastructure increases, automated reasoning tools could benefit inexperienced users as well as experts.

System Studied. We focus on the Amazon Web Services proprietary IaC tool, CloudFormation, the first to be introduced at a large scale, over 10 years ago. AWS, the cloud provider within Amazon, serves millions of customers worldwide. These include private businesses as well as government, education, nonprofit, and healthcare organizations. While the cloud provider is responsible for the faithful deployment of the customers' desired configurations, it is the customer's duty to make sure that these comply with the security requirements of their business context. Few management tools of this scale exist. Notable mentions are Terraform [37], Microsoft Azure's Resource Manager [28], Google Cloud's Deployment Manager [19], and the recently introduced OASIS standard TOSCA [6].

Goal of the Study. Our goal is to improve the quality of the security analyses that are performed over IaC configurations pre-deployment and, by doing so, their overall security. With this study, we investigate the application of description logics to the formalization of and reasoning over IaC deployments. In particular, we are interested in three aspects: (i) whether proposed cloud configurations comply with security best practices, (ii) how to help customers build more secure infrastructure before deploying it, and (iii) to what extent formal automated techniques can support manual pre-deployment security reviews.

Challenges. Little research has been done so far on the possibility of formalizing IaC languages, and no research has been done to devise a logic that is well-suited to reason about cloud infrastructure. By nature, cloud infrastructure interacts with an open environment that is, at best, only partially known. In particular, external-facing APIs and users participate in these interactions. By design, cloud services allow for the composition of smaller components into large infrastructure, the complexity of which creates a challenge with respect to security. Our models should capture the connectivity of resources, the flow of information that spans multiple paths, and the rich security-related information available in IaC configuration files. This is further complicated by the need for a query language for verification and falsification, able to express that mitigations must be present (vs. may be absent) and that security issues must be absent (vs. may be present). Importantly, we need practical tools that support the implementation of all these parts and that can scale to real-world IaC configurations.

Our Contribution. We provide a framework to encode IaC into description logic, and investigate its effectiveness in answering configuration queries and reasoning about dataflow, trust boundaries, and potential problems within the system. Specifically, we test DLs' reasoning capabilities to infer new facts about underspecified resources (such as those not included in a given deployment but used by it) and leverage DLs' open-world assumption to perform verification and refutation, depending on the property being checked. We formalize additional security knowledge that allows for checking system-level semantic properties, i.e., properties that consider the nature of the cloud environment and more complex reachability over an inferred graph representation of the infrastructure.

Throughout the study, we make four novel contributions: (i) the formalization and logical encoding of AWS CloudFormation (Sect. 3); (ii) a technique to express security properties (Sect. 4); (iii) the experimental evaluation of encoding and query times, accounting for the most common security issues that we found over publicly available IaC templates (Sect. 5); and (iv) an extension that enables semantic dataflow reasoning (Sect. 6). Our tool is implemented in Scala and available online [14]. We include preliminaries in Sect. 2, discuss related work in Sect. 7, and conclude in Sect. 8.

Preliminaries

Description Logics. DLs are a family of logics well suited to model relationships between entities. They provide the logical foundation of the well-known Web Ontology Language [20, 23, 32], for which extensive tool support exists (e.g., the Protégé editor and off-the-shelf reasoners such as FaCT++, HermiT, and Pellet [18, 30, 36, 39]). We introduce the description logic \(\mathcal {ALC} \) [1, 24, 34], Attributive Language with Complement, and a few additional features that are relevant for our study. \(\mathcal {ALC} \) formulae are built from symbols from the alphabets \(N_C\), of atomic concept names; \(N_R\), of role names; and \(N_I\), of individual names. These are the DL equivalents of FOL unary predicates, binary predicates, and constants, respectively. \(\mathcal {ALC} \) concept expressions are built according to the grammar:

$$\begin{aligned} C,D\,{:}{:}\!=&\ \bot \mid \top \mid \mathsf A \mid \lnot C \mid C \sqcup D \mid C \sqcap D \mid \exists \mathsf r.C \mid \forall \mathsf r.C \end{aligned}$$

where \(\mathsf A\) is an atomic concept from the set \(N_C\); C, D are possibly complex concepts; and \(\mathsf r\) is a role from the alphabet \(N_R\). Terminological knowledge is represented via general concept inclusion axioms \(C \sqsubseteq D\). As an example, in the remainder of this paper we will refer to two standard axioms that enforce the domain and range of binary relations: \(\mathsf {dom}(r,C)\equiv \exists r.\top \sqsubseteq C \) and \(\mathsf {ran}(r,C)\equiv \exists r^-.\top \sqsubseteq C\). Assertional knowledge is represented via concept assertions \(\mathsf {C}(a)\) and role assertions \(\mathsf {r}(a,b)\). In this paper, we will use three additional operators: inverse roles, functionality constraints, and complex role inclusions. The first, denoted \(r^-\), encodes the converse of the binary relationship r. The second enforces binary relationships to be functional. The third, written \(r \circ s \sqsubseteq t\), establishes that the chaining of the two relationships r and s implies the relationship t, and can be used to implement transitivity (when \(r = s = t\)). A model of a DL knowledge base is an interpretation \(\mathcal {I} \), over a domain \(\varDelta \), that satisfies all the axioms and assertions contained in and implied by the knowledge base. For the purpose of our application, we leverage two classical inference problems: satisfiability and instance retrieval, whose full definitions are found in standard textbooks [2, 3].
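As a small worked example of these notions (our own illustration, not part of the encoding introduced later), consider the knowledge base

$$\begin{aligned} \mathcal {T} = \{\ \exists \mathsf {logsTo}.\top \sqsubseteq \mathsf {Bucket},\ \ \exists \mathsf {logsTo}^-.\top \sqsubseteq \mathsf {Bucket}\ \}, \qquad \mathcal {A} = \{\ \mathsf {logsTo}(b,l)\ \} \end{aligned}$$

Every model of this knowledge base must interpret both b and l as members of \(\mathsf {Bucket}\), so instance retrieval for the concept \(\exists \mathsf {logsTo}.\top \) returns b, even though no concept assertion about b was stated explicitly.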

AWS CloudFormation. AWS CloudFormation, \(\mathsf {cfn}\), provides users with a declarative programming language and a framework to provision and manage over 500 resources spread across 70 services [15]. Footnote 1 Services are products such as storage, databases, and processors, and their interface is implemented through resources, which are the actual modules that users declare and deploy. Their declaration is done by writing one or more so-called CloudFormation Templates (JSON-formatted configuration files). Within a template, users configure settings and communication of the desired resource instances. As an example, let us consider one of the most widely known storage products within AWS: the Simple Storage Service \(\mathsf {S3}\) (also illustrated in Listings 1.1 and 1.2). The CloudFormation interface for \(\mathsf {S3}\) consists of two resources: \(\mathsf {S3}{::}\mathsf {Bucket}\) and \(\mathsf {S3}{::}\mathsf {BucketPolicy}\). A \(\mathsf {Bucket}\) is a single unit of storage whose properties include encryption, replication, and logging settings, which can be viewed as the bucket's own configuration parameters. They could also be references to other resources that are connected to the current one, e.g., the unique ID of another bucket where logs are stored. A \(\mathsf {BucketPolicy}\) is a resource that links an access control policy to a bucket. All the properties that can be instantiated and the structure of resource types such as \(\mathsf {S3}{::}\mathsf {Bucket}\) and \(\mathsf {S3}{::}\mathsf {BucketPolicy}\) are given in the CloudFormation Resource Specification [15]. The resource specification is a collection of files that prescribe resource properties and their allowed values. Provided that a configuration file is valid with respect to the specifications, an IaC deployment environment compiles it into instructions that are then executed to provision the requested resources in the correct dependency order and with the desired settings.

[Listing 1.1 (not reproduced here): snippet of the S3::Bucket resource-type specification]

Formalization and Encoding of IaC Deployments

While setting up this case study, we found it convenient to come up with a formalization, of both IaC resource specifications and IaC configuration files, to use as an intermediate representation during the encoding process. This was also needed since we could not find suitable research in the area (although some preliminary research on IaC formalization does exist: e.g., the PhD thesis in [12]). As mentioned in Sect. 2, users consult the resource specifications to find out what fields and values are allowed when declaring a resource. Intuitively, these provide a sort of type system, or JSON schema, against which configuration files must validate. Configuration files contain the resource declarations of the instances that the user wishes to deploy. Let us illustrate this with some examples. Listing 1.1 shows a snippet of the \(\mathsf {S3}{::}\mathsf {Bucket}\) resource-type specification. In addition to the main resource type, the specification includes definitions for its subproperties, their types, and whether these are required. Although the example only shows string properties, in general, allowed property values range over objects, arrays, and primitive types such as integers, doubles, longs, strings, and booleans. Listing 1.2, on the other hand, shows a common usage scenario of the \(\mathsf {S3}\) storage service, where a bucket with basic configuration is used to store the desired data. The instance has logical ID ConfigS3Bucket, is of type \(\mathsf {S3}{::}\mathsf {Bucket}\), and specifies two top-level properties, BucketName and LoggingConfiguration. It is easy to see that this instance declaration validates against the resource specification of Listing 1.1. The snippet is taken from one of the benchmark deployments evaluated in Sect. 5 (StackSet 15) and, incidentally, it violates a security best practice: "no bucket should store its own logs." This formalization has been instrumental in capturing infrastructure configurations, resource settings, and inter-connections, and in precisely and automatically encoding them into DL.

[Listing 1.2 (not reproduced here): declaration of the ConfigS3Bucket instance of type S3::Bucket]

Encoding. We translate IaC specifications into DL terminological knowledge, and IaC configurations into assertional knowledge. The conceptual modeling features needed to model the former include axioms to define the domain and range of properties, requiredness, and functionality. These give us enough expressivity to infer qualities of nodes that are underspecified, such as those that are referenced by a template but not declared in it (e.g., already deployed and running elsewhere), whose configuration is unknown. To give readers an intuition of the encoding procedure, let us look at the equation below, which contains some of the axioms and assertions generated by the translation of the code in Listings 1.1 and 1.2.

$$\begin{aligned} \textit{Spec}_{\mathsf {S3{:}{:}Bucket}}=\{&\ \mathsf {dom(bucketName,Bucket)},\ \mathsf {ran(bucketName,}\ \textit{String}\mathsf {)},\\&\ \mathsf {(Funct\ bucketName)}, \ ...,\ \mathsf {dom(destinationBucket,LOGCONFIG)},\\&\ \mathsf {ran(destinationBucket,Bucket)},\ ...\ \} \end{aligned}$$

[Listing (not reproduced here)]
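To make the encoding step concrete, the following Scala sketch shows how domain, range, and functionality axioms such as those in \(\textit{Spec}_{\mathsf {S3{:}{:}Bucket}}\) above could be emitted with the OWL API (the library our tool uses, see Sect. 5). The IRIs and entity names are illustrative assumptions; this is not the tool's actual code.

import org.semanticweb.owlapi.apibinding.OWLManager
import org.semanticweb.owlapi.model.IRI
import org.semanticweb.owlapi.vocab.OWL2Datatype

object SpecEncodingSketch extends App {
  // Illustrative sketch only: emits the kind of TBox axioms shown above,
  // not the actual CloudFORMAL encoder.
  val manager  = OWLManager.createOWLOntologyManager()
  val factory  = manager.getOWLDataFactory
  val ontology = manager.createOntology(IRI.create("urn:example:s3spec"))
  val ns       = "urn:example:s3spec#"   // hypothetical namespace

  val bucket     = factory.getOWLClass(IRI.create(ns + "Bucket"))
  val logConfig  = factory.getOWLClass(IRI.create(ns + "LOGCONFIG"))
  val bucketName = factory.getOWLDataProperty(IRI.create(ns + "bucketName"))
  val destBucket = factory.getOWLObjectProperty(IRI.create(ns + "destinationBucket"))

  // dom(bucketName, Bucket), ran(bucketName, String), (Funct bucketName)
  manager.addAxiom(ontology, factory.getOWLDataPropertyDomainAxiom(bucketName, bucket))
  manager.addAxiom(ontology, factory.getOWLDataPropertyRangeAxiom(
    bucketName, factory.getOWLDatatype(OWL2Datatype.XSD_STRING.getIRI)))
  manager.addAxiom(ontology, factory.getOWLFunctionalDataPropertyAxiom(bucketName))

  // dom(destinationBucket, LOGCONFIG), ran(destinationBucket, Bucket)
  manager.addAxiom(ontology, factory.getOWLObjectPropertyDomainAxiom(destBucket, logConfig))
  manager.addAxiom(ontology, factory.getOWLObjectPropertyRangeAxiom(destBucket, bucket))
}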

Security Properties Specification

We group properties into three categories that reflect their high-level meaning: security issues, mitigations, and global protections against security concerns. We view these in analogy to must and may specifications, which one would use to express that an issue may be present (vs. must be absent) or that a protection must be in place (vs. may be missing). Each property type is matched to a corresponding query structure, which aids the translation of security requirements into formal specifications and implements different fail/pass logics. Queries are written as description logic expressions whose outcome can be one of UNSAT, SAT with no instance found (SAT/0), and SAT with instances (SAT/+). These are obtained by running a satisfiability check, possibly followed by an instance retrieval call.

Mitigations are configurations of single resources that reduce the likelihood of a security issue. In order to pass, these checks must be verified. Examples are:

  1. M1

    "All buckets must keep logs,"

  2. M2

    "Only buckets that host websites can have a public preset ACL," and

  3. M3

    "Data stores must have backup or versioning enabled."

Security Issues are configurations that potentially increase exposure to security concerns. In order to pass, these checks must be falsified. Examples are:

  1. I1

    "There may be a bucket that is not encrypted,"

  2. I2

    "Encrypted bucket that sends events to a non-encrypted queue," and

  3. I3

    "There may be a networking component that opens all ports to all."

Global Protections are more general mitigations, applied on single resources or as configuration patterns, whose presence and proper configuration ensure protection of the system as a whole. Examples are:

  1. P1

    "There is an alarm configured to perform an action when triggered," and

  2. P2

    "There is a configuration recorder logging changes to the infrastructure."

We refer the reader to the repository in [14] for the properties specification files. Footnote 2
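As an illustration (with assumed concept and role names, not necessarily the paper's exact vocabulary), issue I1 could be phrased as the concept

$$\begin{aligned} \mathsf {Bucket} \sqcap \lnot \exists \mathsf {bucketEncryption}.\top \end{aligned}$$

Its outcomes then map onto the fail/pass logics above: an unsatisfiable concept means the issue is impossible (UNSAT), a satisfiable concept with no retrieved individuals means the issue is merely possible (SAT/0), and retrieved individuals witness actual misconfigurations (SAT/+).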

Application to Existing Infrastructure

We now discuss the application of our approach to real-world IaC deployments. We analyze AWS CloudFormation specification and configuration files, showing that the approach is practical and scalable, and identifies potential security problems.

Operation of the Tool. We develop a tool that performs three main tasks. First, the encoding of the \(\mathsf {cfn}\) resource specifications into formal models (Resource Terminologies). Footnote 3 Second, the encoding of the actual \(\mathsf {cfn}\) configuration files, also called StackSets, into formal models (Infrastructure Model). Third, inference and query answering for a set of predefined queries. We use the OWLApi [22] for the encoding phase, and JFact [39] as the inference engine.
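As a sketch of how the query-answering task could be wired together with these libraries (illustrative names and IRIs, not the tool's actual code), the check for issue I1 from Sect. 4 might look as follows; it classifies the outcome into UNSAT, SAT/0, or SAT/+.

import org.semanticweb.owlapi.apibinding.OWLManager
import org.semanticweb.owlapi.model.{IRI, OWLOntology}
import org.semanticweb.owlapi.reasoner.OWLReasoner
import uk.ac.manchester.cs.jfact.JFactFactory
import scala.jdk.CollectionConverters._

object IssueCheckSketch {
  // Sketch: evaluate I1 ("there may be a bucket that is not encrypted")
  // against an already-encoded infrastructure model.
  def checkI1(ontology: OWLOntology): String = {
    val factory = OWLManager.createOWLOntologyManager().getOWLDataFactory
    val ns      = "urn:example:infra#"   // hypothetical namespace
    val bucket  = factory.getOWLClass(IRI.create(ns + "Bucket"))
    val encProp = factory.getOWLObjectProperty(IRI.create(ns + "bucketEncryption"))

    // Bucket ⊓ ¬∃bucketEncryption.⊤
    val notEncrypted = factory.getOWLObjectIntersectionOf(
      bucket,
      factory.getOWLObjectComplementOf(
        factory.getOWLObjectSomeValuesFrom(encProp, factory.getOWLThing)))

    val reasoner: OWLReasoner = new JFactFactory().createReasoner(ontology)
    if (!reasoner.isSatisfiable(notEncrypted)) "UNSAT (issue impossible: check passes)"
    else {
      val witnesses = reasoner.getInstances(notEncrypted, false).getFlattened.asScala
      if (witnesses.isEmpty) "SAT/0 (issue possible: check fails, no witness)"
      else s"SAT/+ (check fails on: ${witnesses.mkString(", ")})"
    }
  }
}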

Experimental Setup. We run our tool on 15 CloudFormation StackSets openly available on GitHub. Regarding metrics, we define the infrastructure size as the numbers of both declared resources (N) and their types (\(N_{RT}\)). The latter determines which resource terminologies are imported into the final encoded model and thus influences its size, measured in number of logical axioms (\(N_\alpha \)). The smallest StackSet has 6 resources and 6 resource types; the largest has 508 resources and 21 resource types. We implement 50 properties from the ScoutSuite collection [35] that are applicable at design time and, thus, over IaC deployment files. Of the 50 properties, 29 are mitigations, 18 are security issues, and 3 are global protections. We conduct our evaluation on an Intel Core i5 with 16 GB RAM, perform warmup runs, and clear the heap before each measurement. This tuning helps to minimize the impact of just-in-time compilation and to reduce the likelihood of garbage collection during the measured benchmark runs.

Table 1. Evaluation results (mean times in milliseconds). [table not reproduced here]

Results Evaluation. The average compilation time of the entire \(\mathsf {cfn}\) resource specifications (542 files) was 940 ms. Table 1 reports the results of our experimental evaluation. StackSets are sorted by number of resources. For each, we measure the time taken by the StackSet encoding (ENC), inference (INF), and query answering tasks (grouped by outcome: UNSAT, SAT with no instances, and SAT with instances). As we can see from the table, the encoding time increases with the infrastructure's size, producing larger models that require longer inference times. Average query answering times increase accordingly. UNSAT queries have shorter average answering times than those evaluating to SAT/0 or SAT/+ (UNSAT proofs are found before a SAT outcome can be deduced). In addition, once a query is proved SAT, we invoke a procedure for instance retrieval to determine whether satisfying instances are present or not. The specific infrastructure configuration and its size are the main factors influencing query answering times. Considering that the average template has about 50–100 resources, and templates with 100–500 resources are rare, the results suggest that our approach scales to real-world IaC templates. For example, StackSet 04 has 132 resources, is encoded in 363 ms, classified in 2.1 s, and has a maximum average per-query time of 162 ms. Assuming a pool of 100 checks to be run, the automated modeling and verification of such an infrastructure would take, in the worst case, around 18 s.
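This worst-case estimate follows directly from the reported measurements:

$$\begin{aligned} 0.363\ \text {s (ENC)} + 2.1\ \text {s (INF)} + 100 \times 0.162\ \text {s (queries)} \approx 18.7\ \text {s} \end{aligned}$$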

Found Security Issues

Across all 15 deployments, we run 15 \(\times \) 50 = 750 checks: 608 pass and 142 fail. Of the 142 failing checks, 73 do not return any instance and 69 return one or more instances (i.e., they fail with a SAT/+ outcome). Such a difference is due to the nature of the single check and its definition of failure. A global protection check fails when no instance implementing the protection is found; a security issue check fails whenever the issue is possible (SAT/0 or SAT/+); and a mitigation check fails when no instance is found. We consider SAT/+ findings particularly important, as they do not just witness a potential security issue but also an actual misconfiguration. In particular, the 69 SAT/+-failing checks fail on 239 resource instances, with the most frequently found issues being:

$$\begin{aligned} \textit{Missing or misconfigured encryption }&\;\, 131 \\ \textit{Missing or misconfigured logging }&\;\, 46 \\ \textit{Missing or misconfigured versioning/backup/replication }&\;\, 44 \\ \textit{Missing user password reset requirement }&\;\, 12 \\ \textit{Misconfigured authorization }&\;\, 3 \\ \textit{Misconfigured networking configuration }&\;\, 3 \end{aligned}$$

The 73 findings returning no instances fall into two groups: the absence of any monitoring or alarming system is very frequent, as is the dependency on external resources whose security posture cannot be assessed.

$$\begin{aligned} \textit{Absent global monitoring/alarming/logging protection }&\;\, 41\\ \textit{Usage of external resource with unknown configuration }&\;\, 32 \end{aligned}$$

Fig. 1. Sample template: accounts prod (left) and test (right). [image not reproduced here]

Semantic Reasoning About Dataflows

To conclude our study, we manually craft two proof-of-concept models of terms related to cloud security (ontologies). We use these to extend the formalization of the CloudFormation IaC specification that was automatically generated by our tool. Such domain-specific ontologies formalize several common cloud terms, such as account, deployment, and authenticated and unauthenticated users; generic dataflow terms, such as storage, process, nodes, and flows of different kinds; and service-specific dataflow terms. By adding these on top of the underlying IaC formal specification, we can reason about the higher-level business logic and reachability of the infrastructure, and we can abstract and visualize it in a more convenient way. This is where the full inference power of description logics comes into play. Such inference power would be hard to achieve with an alternative encoding (e.g., using a modal logic). Let us illustrate how this technique is applied to system-level analyses of interest for a security review: dataflow and trust boundary analyses. A trust boundary is a portion of a system whose components trust each other and where information can securely flow. Multiple trust boundaries may exist within one system. Dataflows that travel across boundaries may introduce security issues and should be carefully reviewed. In Fig. 1, we see an example of such a situation, where the infrastructure is deployed across two accounts, prod and test, sharing the resources AccessLog and AccessTopic. In our encoding, we use the so-called DL inclusion axioms to rewrite properties that (when chained) imply the existence of a more general relation and to infer additional characteristics of nodes. For example, in the following listing, axioms (1)–(6) formalize the relationships of "logging to" and "sending notifications to" a resource, which imply the existence of a transitive dataflow between nodes; and axioms (7)–(8) allow us to infer that the node devs@mail is an external node.

$$\begin{aligned} \mathsf {LoggingConfig} \circ \mathsf {DestinationBucket}&\sqsubseteq \mathsf {logsTo}\end{aligned}$$

(1)

$$\begin{aligned} \mathsf {TopicArn^-} \circ \mathsf {Endpoint}&\sqsubseteq \mathsf {sendsNotifications}\end{aligned}$$

(2)

$$\begin{aligned} \mathsf {NotificationConfig} \circ \mathsf {TopicConfig}\circ \mathsf {Topic}&\sqsubseteq \mathsf {sendsNotifications}\end{aligned}$$

(3)

$$\begin{aligned} \mathsf {logsTo}&\sqsubseteq \mathsf {dataflow}\end{aligned}$$

(4)

$$\begin{aligned} \mathsf {sendsNotifications}&\sqsubseteq \mathsf {dataflow}\end{aligned}$$

(5)

$$\begin{aligned} \mathsf {dataflow}\circ \mathsf {dataflow}&\sqsubseteq \mathsf {dataflow}\end{aligned}$$

(6)

$$\begin{aligned} \exists \mathsf {Protocol}.\{``\mathtt {email}"\}&\sqsubseteq \forall \mathsf {Endpoint}.\mathsf {EmailAddress}\end{aligned}$$

(7)

$$\begin{aligned} \mathsf {EmailAddress}&\sqsubseteq \mathsf {ExternalNode} \end{aligned}$$

(8)
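In OWL terms, role-inclusion axioms such as (1), (4), and (6) correspond to sub-property and sub-property-chain axioms. A minimal OWL API sketch (with illustrative IRIs, not the tool's actual code) is:

import org.semanticweb.owlapi.apibinding.OWLManager
import org.semanticweb.owlapi.model.IRI
import java.util.Arrays

object DataflowAxiomsSketch extends App {
  // Sketch: the role-inclusion axioms (1), (4), and (6) above, stated via the OWL API.
  val manager  = OWLManager.createOWLOntologyManager()
  val factory  = manager.getOWLDataFactory
  val ontology = manager.createOntology(IRI.create("urn:example:dataflow"))
  val ns       = "urn:example:dataflow#"   // hypothetical namespace

  val loggingConfig = factory.getOWLObjectProperty(IRI.create(ns + "LoggingConfig"))
  val destBucket    = factory.getOWLObjectProperty(IRI.create(ns + "DestinationBucket"))
  val logsTo        = factory.getOWLObjectProperty(IRI.create(ns + "logsTo"))
  val dataflow      = factory.getOWLObjectProperty(IRI.create(ns + "dataflow"))

  // (1) LoggingConfig ∘ DestinationBucket ⊑ logsTo
  manager.addAxiom(ontology,
    factory.getOWLSubPropertyChainOfAxiom(Arrays.asList(loggingConfig, destBucket), logsTo))
  // (4) logsTo ⊑ dataflow
  manager.addAxiom(ontology, factory.getOWLSubObjectPropertyOfAxiom(logsTo, dataflow))
  // (6) dataflow ∘ dataflow ⊑ dataflow, i.e., dataflow is transitive
  manager.addAxiom(ontology,
    factory.getOWLSubPropertyChainOfAxiom(Arrays.asList(dataflow, dataflow), dataflow))
}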

Fig. 2. Dataflow extracted from Fig. 1. [image not reproduced here]

This encoding enables us to compute a succinct dataflow diagram from the reasoned IaC configuration (see Fig. 2), and to formally verify properties that ordinarily require a manual analysis of the infrastructure and its underlying graph representation. E.g., the question, "can data flow from the customer-data bucket to the outside?" can now be formalized as a DL formula and, using a reasoning engine, the existence of a dataflow that starts at the customer-data bucket and reaches the devs@mail node can be inferred. We note that, due to the structure of the TopicSubscription resource, this dataflow could not have been detected with simple reachability analysis on a graph built without the help of semantic reasoning. Moreover, the dataflow diagram highlights another potential source of information leakage: testers being exposed to customer access information. This needs to be mitigated by enforcing the proper trust boundaries, in particular, by adding a dedicated access log storage for the customer-data bucket in the prod account.
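One possible formalization of this query (using an assumed concept name for the customer-data bucket; the paper's exact vocabulary may differ) asks for instances of the concept

$$\begin{aligned} \mathsf {CustomerDataBucket} \sqcap \exists \mathsf {dataflow}.\mathsf {ExternalNode} \end{aligned}$$

A SAT/+ answer retrieving the customer-data bucket witnesses the leaking path inferred above.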

Related Work

To the best of our knowledge, the problem of formally verifying the design of a cloud infrastructure in its entirety has not been addressed before. Formal reasoning techniques have been successfully applied to different aspects of the cloud, e.g., networks and access policies [4, 5, 7, 16]. Non-formal tools exist that recommend and run checks against already deployed resources [13, 35], or scan IaC templates [10, 11, 38] for syntactical patterns violating security best practices. These checks overlap considerably and can be expressed in our framework as well. The disadvantages of such tools are that checks are local to single components, can be performed only post-deployment, and need complex configurations, access permissions, or even manual interaction. The CFn-Linter [10] has a rule-based component that users can extend with custom syntax checks, but none of the rules currently available focus on security. The CFn-nag linting tool [11] checks compliance with best practices only locally to the single resource; e.g., it cannot detect issues such as "there is an events queue, receiving from a bucket with critical functionality, that may not be encrypted" or "there might be a user that is shared by multiple policies" (which would go against the least-privilege principle); nor can it include in its analysis external resources that are referenced by the template being linted.

Regarding our choice of logic, large-scale configuration problems have been tackled with description logic before [26, 27]. Simpler first-order logic formulas with operators to represent object-oriented interface relationships could be used to model IaC specifications. However, such an encoding would only partially solve our problem, which is more complex because our overall goal is to perform formal semantic analyses (e.g., dataflow and threat modeling). Semantic approaches, including DL-based ones, are being used to combine conceptual modeling of security engineers' expertise with the provable and explainable inference capabilities of logics. As an example, we refer the reader to the OWASP "Ontology-driven Threat Modeling" project [31], which aims at the formalization of security-related knowledge in the context of different types of computer systems by means of description logic ontologies. In contrast to logic programming languages, such as Datalog, DLs inherently support functionality axioms and the existence of anonymous individuals within a domain that is assumed to be open. These are supported out of the box without the need for an additional, more complex axiomatization or encoding. In particular, we took advantage of DL's open-world assumption to implement, in our properties encoding, both verification and falsification. Another alternative to DLs as a modeling language would be to use 3-valued models with labels on states and transitions and apply model checking [8, 9]. However, expressive branching-time logics [25, 33] have not been studied in the context of 3-valued models, and we are also not aware of tool support at the level available for DLs (cf. [17, 21]).

Conclusion and Future Work

Throughout this case study, we investigated the usage of description logic-based semantic reasoning to evaluate the security of cloud infrastructure pre-deployment. We encoded Amazon Web Services' Infrastructure as Code specifications and configurations into description logic models and verified the presence and absence of potential security issues. We showed how this approach enables deeper system-level analyses such as dataflow analysis. All results can be generalized to other existing IaC tools. While working on this project, we interacted with developers on two occasions. First, for the benchmark templates used in our experimental evaluation, we contacted the owners, told them about the misconfigurations, and discussed potential security implications. Second, within AWS, security engineers use a technique based on this paper for security reviews of AWS products before they are launched, helping developers fix real issues pre-deployment. In the process, we received valuable feedback that we used to improve precision and reduce the number of false-positive results. We plan to continue researching an even better-fitting description logic formalism, query language, three-valued semantics, and decision procedures for verification and falsification of properties relevant to security analyses, such as dataflows, trust boundaries, and threat modeling.

Notes

  1. As of August 2020; the exact number is Region-dependent.

  2. [footnote text not preserved in this copy]

  3. [footnote text not preserved in this copy]

References

  1. Baader, F., Calvanese, D., McGuinness, D.L., Nardi, D., Patel-Schneider, P.F. (eds.): The Description Logic Handbook: Theory, Implementation, and Applications. Cambridge University Press (2003)

  2. Baader, F., Horrocks, I., Lutz, C., Sattler, U.: An Introduction to Description Logic. Cambridge University Press (2017)

  3. Baader, F., Horrocks, I., Sattler, U.: Description logics. In: Handbook of Knowledge Representation, Foundations of Artificial Intelligence, vol. 3, pp. 135–179. Elsevier (2008)

  4. Backes, J., et al.: Reachability analysis for AWS-based networks. In: Dillig, I., Tasiran, S. (eds.) CAV 2019. LNCS, vol. 11562, pp. 231–241. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-25543-5_14

  5. Backes, J., et al.: Semantic-based automated reasoning for AWS access policies using SMT. In: FMCAD, pp. 1–9. IEEE (2018)

  6. Binz, T., Breitenbücher, U., Kopp, O., Leymann, F.: TOSCA: portable automated deployment and management of cloud applications. In: Bouguettaya, A., Sheng, Q., Daniel, F. (eds.) Advanced Web Services, pp. 527–549. Springer, New York (2014). https://doi.org/10.1007/978-1-4614-7535-4_22

  7. Bouchenak, S., Chockler, G.V., Chockler, H., Gheorghe, G., Santos, N., Shraer, A.: Verifying cloud services: present and future. Operating Syst. Rev. 47(2), 6–19 (2013)

  8. Bruns, G., Godefroid, P.: Model checking partial state spaces with 3-valued temporal logics. In: Halbwachs, N., Peled, D. (eds.) CAV 1999. LNCS, vol. 1633, pp. 274–287. Springer, Heidelberg (1999). https://doi.org/10.1007/3-540-48683-6_25

  9. Bruns, G., Godefroid, P.: Model checking with multi-valued logics. In: Díaz, J., Karhumäki, J., Lepistö, A., Sannella, D. (eds.) ICALP 2004. LNCS, vol. 3142, pp. 281–293. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-27836-8_26

  10. The AWS CloudFormation Linter (2020). https://github.com/aws-cloudformation/cfn-python-lint. Accessed 15 Oct 2020

  11. The CFnNag Linting Tool (2020). https://github.com/stelligent/cfn_nag. Accessed 15 Oct 2020

  12. Challita, S.: Inferring models from Cloud APIs and reasoning over them: a tooled and formal approach. (Inférer des modèles à partir d'APIs cloud et raisonner dessus: une approche outillée et formelle). Ph.D. thesis, Lille University of Science and Technology, France (2018)

  13. Infrastructure Security, Compliance, and Governance (2020). http://www.cloudconformity.com/. Accessed 04 Aug 2020

  14. CloudFORMAL: Image Implementation. http://github.com/claudiacauli/CloudFORMAL. Accessed 15 Oct 2020

  15. Resource Specification (2020). https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-resource-specification.html. Accessed 13 Aug 2020

  16. Cook, B.: Formal reasoning about the security of Amazon Web Services. In: Chockler, H., Weissenbacher, G. (eds.) CAV 2018. LNCS, vol. 10981, pp. 38–47. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-96145-3_3

  17. D'Ippolito, N., Fischbein, D., Chechik, M., Uchitel, S.: MTSA: the modal transition system analyser. In: ASE, pp. 475–476. IEEE Computer Society (2008)

  18. Glimm, B., Horrocks, I., Motik, B., Stoilos, G., Wang, Z.: HermiT: an OWL 2 reasoner. J. Autom. Reason. 53(3), 245–269 (2014)

  19. Google Deployment Manager. https://cloud.google.com/deployment-manager. Accessed 28 Jan 2021

  20. Grau, B.C., Horrocks, I., Motik, B., Parsia, B., Patel-Schneider, P.F., Sattler, U.: OWL 2: the next step for OWL. J. Web Semant. 6(4), 309–322 (2008)

  21. Gurfinkel, A., Wei, O., Chechik, M.: Yasm: a software model-checker for verification and refutation. In: Ball, T., Jones, R.B. (eds.) CAV 2006. LNCS, vol. 4144, pp. 170–174. Springer, Heidelberg (2006). https://doi.org/10.1007/11817963_18

  22. Horridge, M., Bechhofer, S.: The OWL API: a Java API for OWL ontologies. Semant. Web 2(1), 11–21 (2011)

  23. Horrocks, I., Patel-Schneider, P.F., van Harmelen, F.: From SHIQ and RDF to OWL: the making of a web ontology language. J. Web Semant. 1(1), 7–26 (2003)

  24. Krötzsch, M., Simancik, F., Horrocks, I.: A description logic primer. CoRR abs/1201.4089 (2012)

  25. Kupferman, O., Grumberg, O.: Buy one, get one free!!! J. Log. Comput. 6(4), 523–539 (1996)

  26. McGuinness, D.L., Resnick, L.A., Isbell, C.L., Jr.: Description logic in practice: a classic application. In: IJCAI, pp. 2045–2046. Morgan Kaufmann (1995)

  27. McGuinness, D.L., Wright, J.R.: Conceptual modelling for configuration: a description logic-based approach. AI EDAM 12(4), 333–344 (1998)

  28. Microsoft Azure Resource Manager (2020). https://azure.microsoft.com/en-us/features/resource-manager/. Accessed 28 Jan 2021

  29. Morris, K.: Infrastructure as Code: Managing Servers in the Cloud. O'Reilly Media, Inc. (2016)

  30. Musen, M.A.: The Protégé project: a look back and a look forward. AI Matters 1(4), 4–12 (2015)

  31. OWASP Ontology-driven Threat Modeling. https://github.com/OWASP/OdTM. Accessed 14 May 2021

  32. Patel-Schneider, P., Grau, B.C., Motik, B.: OWL 2 web ontology language direct semantics (second edition). W3C recommendation, W3C (December 2012). http://www.w3.org/TR/2012/REC-owl2-direct-semantics-20121211/

  33. Sattler, U., Vardi, M.Y.: The hybrid \({\mu }\)-calculus. In: Goré, R., Leitsch, A., Nipkow, T. (eds.) IJCAR 2001. LNCS, vol. 2083, pp. 76–91. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-45744-5_7

  34. Schmidt-Schauß, M., Smolka, G.: Attributive concept descriptions with complements. Artif. Intell. 48(1), 1–26 (1991)

  35. Multi-cloud Security Auditing Tool (2020). http://github.com/nccgroup/ScoutSuite. Accessed 04 Aug 2020

  36. Sirin, E., Parsia, B., Grau, B.C., Kalyanpur, A., Katz, Y.: Pellet: a practical OWL-DL reasoner. J. Web Semant. 5(2), 51–53 (2007)

  37. Terraform. https://www.terraform.io/. Accessed 28 Jan 2021

  38. Static Analysis Security Scanner for Terraform (2020). https://tfsec.dev/. Accessed 10 May 2021

  39. Tsarkov, D., Horrocks, I.: FaCT++ description logic reasoner: system description. In: Furbach, U., Shankar, N. (eds.) IJCAR 2006. LNCS (LNAI), vol. 4130, pp. 292–297. Springer, Heidelberg (2006). https://doi.org/10.1007/11814771_26


Acknowledgements

This research is supported by the ERC consolidator grant D-SynMA under the EU's Horizon 2020 research and innovation programme (grant agreement No. 772459) and by Amazon Web Services.


Rights and permissions

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.


Copyright information

© 2021 The Author(s)

Cite this paper

Cauli, C., Li, M., Piterman, N., Tkachuk, O. (2021). Pre-deployment Security Assessment for Cloud Services Through Semantic Reasoning. In: Silva, A., Leino, K.R.M. (eds.) Computer Aided Verification. CAV 2021. Lecture Notes in Computer Science, vol. 12759. Springer, Cham. https://doi.org/10.1007/978-3-030-81685-8_36

