
Thread: How to install the NVIDIA CUDA driver on an Amazon AWS server

  1. #1
    Join Date
    2015-Jun
    Posts
    2

    How to install the NVIDIA CUDA driver on an Amazon AWS server

    Hello everyone!

    This has been tormenting me for a long time: I cannot seem to get the CUDA driver working on the server.
    The GPU does not show up in pyrit's list of cores.

    root@ip-172-31-29-87:/home# nvidia-smi -q

    Code:
    ==============NVSMI LOG==============
    
    Timestamp                           : Sun Jun 21 17:15:58 2015
    Driver Version                      : 340.29
    
    Attached GPUs                       : 1
    GPU 0000:00:03.0
        Product Name                    : GRID K520
        Product Brand                   : Grid
        Display Mode                    : Disabled
        Display Active                  : Disabled
        Persistence Mode                : Disabled
        Accounting Mode                 : Disabled
        Accounting Mode Buffer Size     : 128
        Driver Model
            Current                     : N/A
            Pending                     : N/A
        Serial Number                   : 0321014026571
        GPU UUID                        : GPU-1530b20a-fd8a-6b8d-a35e-ad3d7cfbf027
        Minor Number                    : 0
        VBIOS Version                   : 80.04.D4.00.03
        MultiGPU Board                  : No
        Board ID                        : 0x3


    Code:
    root@ip-172-31-29-87:/home# pyrit list_cores
    Pyrit 0.4.0 (C) 2008-2011 Lukas Lueg http://pyrit.googlecode.com
    This code is distributed under the GNU General Public License v3+
    
    The following cores seem available...
    #1:  'CPU-Core (SSE2)'
    #2:  'CPU-Core (SSE2)'
    #3:  'CPU-Core (SSE2)'
    #4:  'CPU-Core (SSE2)'
    #5:  'CPU-Core (SSE2)'
    #6:  'CPU-Core (SSE2)'
    #7:  'CPU-Core (SSE2)'
    #8:  'CPU-Core (SSE2)'
    root@ip-172-31-29-87:/home#

    Code:
    root@ip-172-31-29-87:/home# lspci
    00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
    00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
    00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
    00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 01)
    00:02.0 VGA compatible controller: Cirrus Logic GD 5446
    00:03.0 VGA compatible controller: NVIDIA Corporation GK104GL [GRID K520] (rev a1)
    00:1f.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01)
    root@ip-172-31-29-87:/home#


    Is there a guide for this somewhere?
    Or is it simply not possible to get the CUDA driver working with a configuration like this?
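
    (For later readers: a minimal sketch of the kind of install sequence being asked about, assuming an Ubuntu 14.04 g2.2xlarge instance like the one in the lspci output above. The package names, driver version, and tarball name are assumptions, not something confirmed in this thread.)

    Code:
    # Assumed: Ubuntu 14.04 on a g2.2xlarge (GRID K520); package names may differ on other releases.
    sudo apt-get update
    sudo apt-get install -y build-essential linux-headers-$(uname -r)

    # NVIDIA driver (the log above shows 340.29) plus the CUDA toolkit
    sudo apt-get install -y nvidia-340 nvidia-cuda-toolkit

    # Pyrit's GPU support lives in a separate source module (cpyrit-cuda);
    # without it, "pyrit list_cores" only ever shows CPU cores.
    sudo apt-get install -y python-dev libssl-dev zlib1g-dev
    tar xzf cpyrit-cuda-0.4.0.tar.gz        # illustrative filename
    cd cpyrit-cuda-0.4.0 && python setup.py build && sudo python setup.py install

    # Reboot so the nvidia kernel module is loaded, then re-check:
    sudo reboot
    pyrit list_cores    # a CUDA device entry should now appear alongside the CPU cores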

  2. #2
    Join Date
    2014-Jul
    Posts
    16
    I know this is a year old, but I am finally putting a script together (or a Vagrant script) to spin up the necessary components (pyrit, crunch, CUDA, drivers, and Python) to get a template for using the VM at AWS. I will share it when I get it done. I think we will be able to run 8-character words (upper case, lower case, and digits) as a password feed... but I am still tinkering. Nobody on the web is very consistent in their approach.
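
    (Not the actual script, which was never posted, but a rough outline of what such a Vagrant or user-data provisioner might look like; every package name below is an assumption and would need checking against the chosen distro.)

    Code:
    #!/bin/bash
    # provision.sh -- illustrative outline only; run by Vagrant or as EC2 user-data
    # on a fresh GPU instance.
    set -e
    apt-get update
    apt-get install -y build-essential linux-headers-$(uname -r) \
                       python-dev python-scapy libssl-dev zlib1g-dev
    apt-get install -y crunch                    # wordlist generator
    apt-get install -y nvidia-340 nvidia-cuda-toolkit
    # ...build pyrit and its cpyrit-cuda module from source here, then verify:
    pyrit list_cores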

  3. #3
    Join Date
    2015-Jun
    Posts
    2
    Quote Originally Posted by crypts3c
    I know this is a year old, but I am finally putting a script together (or a Vagrant script) to spin up the necessary components (pyrit, crunch, CUDA, drivers, and Python) to get a template for using the VM at AWS. I will share it when I get it done. I think we will be able to run 8-character words (upper case, lower case, and digits) as a password feed... but I am still tinkering. Nobody on the web is very consistent in their approach.

    Hey, I'm still around. So what does your script for setting up CUDA look like?

  4. #4
    Join Date
    2014-Jul
    Posts
    16
    Hey... I see this is a year old, but I finally got around to experimenting with this, and I too immediately thought Vagrant would be the best means of provisioning an AMI for cloud cracking.

    I have read a lot about various methods, but one interesting variation I came across is to build a "distributed" AMI with multiple instances. That way you could ditch the dictionary approach and simply pipe in a Crunch-generated word list that would otherwise take up several terabytes or more of disk space...
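
    (On the piping idea: crunch writes candidates to stdout and pyrit can take its wordlist from stdin, so the multi-terabyte list never has to touch the disk. The SSID, capture filename, and charset path below are made-up placeholders.)

    Code:
    # 8-character upper/lower/digit candidates, generated on the fly and fed straight to pyrit;
    # "-i -" tells pyrit to read the wordlist from stdin.
    crunch 8 8 -f /usr/share/crunch/charset.lst mixalpha-numeric | \
        pyrit -r wpa-handshake.cap -e MyNetwork -i - attack_passthrough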

    I'm working on this now, but if the OP is still working on it, or has a Vagrant script or even another strategy, I'd love to discuss...

