Showing posts with label mysql. Show all posts

Saturday, November 21, 2015

Program 5: Python

Here is your final coding assignment, due December 4th by end of day.

Goal:

You will be implementing a query interface for a database of World Series results using Python. Your code should read the search data from a form, retrieve the matching rows from a MySQL database, and output the results of the search as an HTML-formatted table. (Hint: You can use Python's CGI library for the form handling.)

We have provided the HTML source of a simple search form and the SQL code for creating the table in MySQL. The database table is structured as follows:
    Year (Primary Key)
    Winning Team
    Winning Manager
    League of Winning Team
    Games
    Losing Team
    Losing Manager
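
Putting the pieces together, the flow might look like the sketch below. The form field name ("year"), the table name ("worldseries"), and the connection credentials are all guesses for illustration; match them to the provided form and SQL file.

```python
#!/usr/bin/env python
# Sketch of the assignment's query flow: read a form field, query MySQL,
# and emit an HTML table. Field, table, and credential names are assumed.

def render_table(headers, rows):
    """Format query results as an HTML table."""
    parts = ["<table border='1'>",
             "<tr>" + "".join("<th>%s</th>" % h for h in headers) + "</tr>"]
    for row in rows:
        parts.append("<tr>" + "".join("<td>%s</td>" % c for c in row) + "</tr>")
    parts.append("</table>")
    return "\n".join(parts)

def main():
    # Imports kept here so render_table stays testable without a web server.
    import cgi
    import MySQLdb  # or pymysql, whichever your course server provides
    form = cgi.FieldStorage()
    year = form.getfirst("year", "")
    db = MySQLdb.connect(user="user", passwd="secret", db="worldseries")
    cur = db.cursor()
    # Parameterized query: never interpolate form input into SQL yourself.
    cur.execute("SELECT * FROM worldseries WHERE year = %s", (year,))
    print("Content-Type: text/html\n")
    print(render_table(["Year", "Winning Team", "Winning Manager",
                        "League of Winning Team", "Games",
                        "Losing Team", "Losing Manager"],
                       cur.fetchall()))

# In the deployed CGI script, call main() here.
```

Note the parameterized `%s` placeholder: passing the year as a query parameter avoids SQL injection from the form input.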

Resources:

Grading:
  • 10% Good turnin
  • Form handling
  • Database connection
  • Correct output
Deliverables:
    • Python source file(s)
    • Readme with any references

Sunday, December 16, 2012

Find: Web Served, part 4: Get your database on

For new readers just joining us, this is the fourth in a series of articles on getting your hands dirty by setting up a personal Web server and some popular Web applications. We've chosen a Linux server and Nginx as our operating system and Web server, respectively; we've given it the capability to serve encrypted pages; and we've added the capability to serve PHP content via PHP-FPM. Most popular Web apps, though, require a database to store some or all of their content, and so the next step is to get one spun up.

But which database? There are many, and every single one of them has its advantages and disadvantages. Ultimately we're going to go with the MySQL-compatible replacement MariaDB, but understanding why we're selecting this is important.

To SQL or NoSQL, that is the question

In most cases these days, when someone says "database" they're talking about a relational database, which is a collection of different sets of data, organized into tables. An individual record in a database is stored as a row in a table of similar records—for example, a table in a business's database might contain all of that business's customers, with each record consisting of the customer's first name, last name, and a customer identification number. Another table in this database might contain the states where the customers live, with each row consisting of a customer's ID number and the state associated with it. A third table might contain all the items every customer has ordered in the past, with each record consisting of a unique order number, the ID of the customer who ordered it, and the date of the order. In each example, the rows of the table are the records, and the columns of the table are the fields each record is made of.
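
The three-table example above can be sketched with Python's built-in sqlite3 module (SQLite is a relational database too; the table and column names here are made up for illustration). The customer ID is what ties the tables together:

```python
# Toy relational schema: customers, their states, and their orders,
# linked by the customer ID. Names and data are invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, first TEXT, last TEXT)")
cur.execute("CREATE TABLE states (customer_id INTEGER, state TEXT)")
cur.execute("CREATE TABLE orders (order_no INTEGER PRIMARY KEY, customer_id INTEGER, order_date TEXT)")
cur.execute("INSERT INTO customers VALUES (1, 'Ada', 'Lovelace')")
cur.execute("INSERT INTO states VALUES (1, 'NC')")
cur.execute("INSERT INTO orders VALUES (1001, 1, '2012-12-01')")

# A join reassembles one logical record from the three tables.
row = cur.execute(
    "SELECT c.first, s.state, o.order_no FROM customers c "
    "JOIN states s ON s.customer_id = c.id "
    "JOIN orders o ON o.customer_id = c.id"
).fetchone()
print(row)  # → ('Ada', 'NC', 1001)
```

This ability to join rows across tables by shared keys is exactly what makes the database "relational."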

Saturday, November 10, 2012

Find: Get started at no cost with a faster, larger Cloud SQL database

Free 100GB MySQL DBs on Google.

Get started at no cost with a faster, larger Cloud SQL database

By Joe Faith, Product Manager

Cross-posted with the Official Google Enterprise Blog

You want your applications to be fast, even with millions of users. Anytime your user tries to retrieve information from the app or update settings, it should happen instantly. For the best performance, you need faster, larger databases - especially if you have a growing user base to serve.

Google App Engine is designed to scale. And now Google Cloud SQL—a MySQL database that lives in Google’s cloud—has new features to meet the demand for faster access to more data. With today’s updates, you can now work with bigger, faster MySQL databases in the cloud:


• More Storage: We’re increasing the available storage on Cloud SQL to 100GB – ten times more than what used to be available.
• Faster Reads: We’re increasing the maximum size of instances to 16GB RAM, a 4x increase in the amount of data you can cache.
• Faster Writes: We’re adding functionality for optional asynchronous replication, which gives the write performance of a non-replicated database with the availability of a replicated one.
• EU datacenter availability: Now you can choose to store your data and run your Cloud SQL database instance in either our US or EU data centers.
• Integration with Google Apps Script: We’re making it quick and easy for businesses using Google Apps to use Cloud SQL. Publish and share data with Google Sheets, add data to Google Sites pages, or create simple Google Forms without worrying about hosting or configuring servers.

Introducing a new trial offer

Many of you have requested a trial offer to test out Cloud SQL. Today, we’re introducing a 6-month trial offer at no charge, effective until June 1, 2013. This will include one Cloud SQL instance with 0.5 GB of storage. Sign up now and get started on Cloud SQL at no cost.

Joe Faith is a Product Manager on the Google Cloud Team. In a previous life he was a researcher in machine learning, bioinformatics, and information visualization, and was founder of charity fundraising site Fundraising Skills.

Wednesday, August 15, 2012

Find: MySQL at Twitter

MySQL is the persistent storage technology behind most Twitter data: the interest graph, timelines, user data and the Tweets themselves. Due to our scale, we push MySQL a lot further than most companies. Of course, MySQL is open source software, so we have the ability to change it to suit our needs. Since we believe in sharing knowledge and that open source software facilitates innovation, we have decided to open source our MySQL work on GitHub under the BSD New license.

The objectives of our work thus far have primarily been to improve the predictability of our services and make our lives easier. Some of the work we’ve done includes:

• Add additional status variables, particularly from the internals of InnoDB. This allows us to monitor our systems more effectively and understand their behavior better when handling production workloads.
• Optimize memory allocation on large NUMA systems: allocate InnoDB's buffer pool fully on startup, fail fast if memory is not available, and ensure performance over time even when the server is under memory pressure.
• Reduce unnecessary work through improved server-side statement timeout support. This allows the server to proactively cancel queries that run longer than a millisecond-granularity timeout.
• Export and restore the InnoDB buffer pool using a safe and lightweight method. This enables us to build tools to support rolling restarts of our services with minimal pain.
• Optimize MySQL for SSD-based machines, including page-flushing behavior and reduction in writes to disk to improve lifespan.
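
The statement-timeout idea above can be illustrated in plain Python: work is abandoned once it overruns a millisecond-granularity deadline. This is only a model of the concept, not Twitter's actual server-side implementation (upstream MySQL later gained a comparable facility, the MAX_EXECUTION_TIME optimizer hint, in 5.7):

```python
# Conceptual model of a millisecond-granularity statement timeout:
# stop doing work as soon as the deadline passes, rather than letting
# a runaway "query" consume resources indefinitely.
import time

def run_with_timeout(step, steps, timeout_ms):
    """Run `step` up to `steps` times, aborting once past the deadline."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    done = 0
    for _ in range(steps):
        if time.monotonic() > deadline:
            return done, "cancelled"   # query cut off mid-flight
        step()
        done += 1
    return done, "complete"

# A fast "query" finishes; a slow one is proactively cancelled.
fast = run_with_timeout(lambda: None, 10, timeout_ms=50)
slow = run_with_timeout(lambda: time.sleep(0.02), 10, timeout_ms=50)
print(fast, slow)
```

The point of doing this server-side, as the post describes, is that the server can reclaim the resources itself instead of trusting every client to give up politely.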
We look forward to sharing our work with upstream and other downstream MySQL vendors, with a goal of improving the MySQL community. For a more complete look at our work, please see the change history and documentation.

If you want to learn more about our usage of MySQL, we will be speaking about Gizzard, our sharding and replication framework on top of MySQL, at the Percona Live MySQL Conference and Expo on April 12th. Finally, contact us on GitHub or file an issue if you have questions.

On behalf of the Twitter DBA and DB development teams,

- Jeremy Cole (@jeremycole)

- Davi Arnaut (@darnaut)